Kate Moore
4.4 Investigating LLMs
In this lesson, students interact directly with several LLMs to evaluate model performance and think critically about the implications of model output for their work, their peers, and society.
4.3 Prompting
In this lesson, students interact directly with several LLMs to explore differences in model output and performance. Through a guided investigation of LLM responses to different prompts, students learn that LLM output is 1) probabilistic (answers differ nearly every time) and 2) not based on ground truth. They also learn to build different prompts or prompt frameworks for different purposes.
4.2 Fine Tuning
In this lesson, students do not use LLMs. Instead, they engage in playful, hands-on/unplugged activities that focus on the crucial stage of fine-tuning LLMs, highlighting the human influence on their behavior and output.
4.1 Pre-training
In this lesson, students do not use LLMs. Instead, they engage in playful, hands-on/unplugged activities that aim to demystify how LLMs learn language by exploring concepts like tokenization, vectors, and attention mechanisms. The lesson consists of three activities, each building upon the previous one.
4.0 Intro to LLMs Lesson
In this lesson, students ≥13 use LLMs (e.g., ChatGPT, Gemini) to engage in a series of investigations that reveal how different LLMs can output misleading and/or biased information (teachers demo for students <13). By exploring and discussing these limitations, learners realize that biased output can result from patterns in the data, "hard-coded" or programmed directives from the developers, or personal bias.
Everyday AI Facilitator Guide
This document is a guide for facilitators leading versions of the Everyday AI (EdAI) teacher professional development (PD) experience. It is also for anyone thinking about or interested in teacher PD for artificial intelligence (AI) literacy.