Study reveals why AI models that analyze medical images can be biased In 2022, MIT researchers reported that AI models can make accurate predictions about a patient's race from chest X-rays; that research team has now found that the models most accurate at making demographic predictions also show the biggest "fairness gaps." The findings suggest that these models may be using "demographic shortcuts" when making their diagnostic predictions.
Why Do AI Projects Fail? Gartner and HBR estimate that up to 85% of AI projects fail before or after deployment, double the failure rate of conventional software projects.
Responsible AI and Tech Justice: A Guide for K-12 Education This report, from the Kapor Foundation, offers six core components that serve as a guide for "educators, parents, policymakers, and advocates seeking to design learning experiences across educational settings, where both the critical interrogation of technologies and the disruption and creation of more ethical and equitable solutions are prioritized." Read the details and full report behind this executive summary.
Critical Thinking and Ethics in the Age of Generative AI in Education This report, from the USC Center for Generative AI & Society, focuses on the potential of generative AI technologies to transform educational spaces, as well as the implications of this emerging technology for the future of education and society. Sections touch on the use of AI and generative AI in college and K-12 classrooms, as well as the ethics of using these tools in those spaces.
Air Canada must honor refund policy invented by airline’s chatbot The chatbot provided inaccurate information to a customer, encouraging the customer to purchase bereavement travel and then use Air Canada's bereavement travel refund policy to receive reimbursement. Unfortunately, Air Canada had no such policy; it was an AI hallucination. In the legal proceedings, Air Canada argued that the customer should have known better than to trust a chatbot.
Generative AI and the Implications for Science Communication Innovations in large language modeling and other generative artificial intelligence tools, such as ChatGPT, have spurred debate about their capacities, risks, and broader societal implications.
How I accidentally became a fierce critic of AI "Once I was enamored with the promise of artificial intelligence. But then I noticed the huge holes in its view of the world."
Humans Absorb Bias from AI—And Keep It after They Stop Using the Algorithm Artificial intelligence programs, like the humans who develop and train them, are far from perfect. AI can display biases introduced through the massive data troves these programs are trained on, biases that are undetectable to many users.
FANToM: A Benchmark for Stress-testing Machine Theory of Mind in Interactions Theory of mind (ToM) evaluations currently focus on testing models with passive narratives that inherently lack interactivity. This team introduces FANToM 👻, a new benchmark designed to stress-test ToM within information-asymmetric conversational contexts via question answering. The authors show how the benchmark draws on important theoretical requisites from psychology and necessary empirical considerations for evaluating large language models (LLMs).
AI 101 for Teachers Professional Learning Series AI 101 for Teachers (https://code.org/ai/pl/101) is a new five-part professional learning video series led by Code.org, ETS, ISTE, and Khan Academy.
Article - Maybe We Will Finally Learn More About How A.I. Works Stanford researchers have ranked 10 major A.I. models on how openly they operate. See the New York Times article and the Foundation Model Transparency Index for more information.
October Webinar We had a great October webinar! If you missed it, feel free to check out the slide deck and add your comments to the video!
September Webinar We had a great September webinar! If you missed it, feel free to check out the slide deck and add your comments to the video!