
Everyday AI


Study reveals why AI models that analyze medical images can be biased

Posted July 2, 2024 by Kate Moore

In 2022, MIT researchers reported that AI models can make accurate predictions about a patient's race from chest X-rays. That research team has now found that the models most accurate at making demographic predictions also show the biggest "fairness gaps." The findings suggest that these models may be using "demographic shortcuts" when making their diagnostic evaluations.

Why Do AI Projects Fail?

Posted May 15, 2024 by Kate Moore

Gartner and HBR estimate that up to 85% of AI projects fail before or after deployment, roughly double the failure rate of conventional software projects.

Responsible AI and Tech Justice: A Guide for K-12 Education

Posted February 21, 2024 by Kate Moore

This report, from the Kapor Foundation, offers six core components that serve as a guide for "educators, parents, policymakers, and advocates seeking to design learning experiences across educational settings, where both the critical interrogation of technologies and the disruption and creation of more ethical and equitable solutions are prioritized." Read the details and full report behind this executive summary.

Critical Thinking and Ethics in the Age of Generative AI in Education

Posted February 21, 2024 by Kate Moore

This report, from the USC Center for Generative AI & Society, focuses on the potential of generative AI technologies to transform educational spaces, as well as the implications of this emerging technology for the future of education and society. Sections touch on the use of AI and generative AI in college and K-12 classrooms, as well as the ethics of using these tools in those spaces.

Air Canada must honor refund policy invented by airline’s chatbot

Posted February 21, 2024 by Kate Moore

"Air Canada must honor refund policy invented by airline's chatbot." The chatbot provided inaccurate information to a customer, encouraging the customer to purchase bereavement travel and then use Air Canada's bereavement travel refund policy to receive a reimbursement. Unfortunately, Air Canada has no such policy; the chatbot's answer was an AI hallucination. In court, Air Canada argued that the customer should have known better than to trust a chatbot.

Generative AI and the Implications for Science Communication

Posted November 8, 2023 by Kate Moore

Innovations in large language models and other generative artificial intelligence tools, such as ChatGPT, have spurred debate about their capacities, risks, and broader societal implications.

How I accidentally became a fierce critic of AI

Posted October 29, 2023 by Kate Moore

"Once I was enamored with the promise of artificial intelligence. But then I noticed the huge holes in its view of the world."

Humans Absorb Bias from AI—And Keep It after They Stop Using the Algorithm

Posted October 29, 2023 by Kate Moore

Artificial intelligence programs, like the humans who develop and train them, are far from perfect. AI can display biases that are introduced through the massive data troves these programs are trained on, and that remain undetectable to many users.

FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions

Posted October 29, 2023 by Kate Moore

Theory of mind (ToM) evaluations currently focus on testing models using passive narratives that inherently lack interactivity. This team introduces FANToM 👻, a new benchmark designed to stress-test ToM within information-asymmetric conversational contexts via question answering. The authors show how the benchmark draws upon important theoretical requisites from psychology and necessary empirical considerations when evaluating large language models (LLMs).

AI 101 for Teachers Professional Learning Series

Posted October 27, 2023 by Kate Moore

AI 101 for Teachers (https://code.org/ai/pl/101) is a new five-part professional learning video series led by Code.org, ETS, ISTE, and Khan Academy.

Article - Maybe We Will Finally Learn More About How A.I. Works

Posted October 19, 2023 by Kate Moore

Stanford researchers have ranked 10 major A.I. models on how openly they operate. See the New York Times article and the Foundation Model Transparency Index for more information.

October Webinar

Posted October 14, 2023 by Kate Moore

We had a great October webinar! If you missed it, feel free to check out the slide deck and add your comments to the video!

September Webinar

Posted October 14, 2023 by Kate Moore

We had a great September webinar! If you missed it, feel free to check out the slide deck and add your comments to the video!

Unless otherwise licensed, all works accessible here are licensed under a Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). Everyday AI is partially funded by the National Science Foundation.


Everyday-AI.org is the online community network for teachers who use and contribute to the DAILy curriculum.


What's news categories.


  • Professional Development (2)
