Not piloted with students
4.4 Investigating LLMs
In this lesson, students interact directly with several LLMs to evaluate model performance and think critically about the implications of model output for their own work, for their peers, and for society.
4.3 Prompting
In this lesson, students interact directly with several LLMs to explore differences in model output and performance. Through a guided investigation of LLM responses to different prompts, students learn that LLM output is 1) probabilistic (answers differ nearly every time) and 2) not based on ground truth. They also learn to build different prompts or prompt frameworks for different purposes.
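The probabilistic behavior students observe can be illustrated in miniature. This is a hedged sketch, not how any real LLM is implemented: the token vocabulary and probabilities below are invented for illustration, but the mechanism (weighted random sampling of the next token) is the reason repeated runs of the same prompt produce different answers.

```python
import random

# Hypothetical next-token distribution for the prompt "The sky is ".
# A real LLM scores its entire vocabulary; here we use four toy candidates.
next_token_probs = {"blue": 0.55, "grey": 0.25, "cloudy": 0.15, "green": 0.05}

def sample_next_token():
    """Pick one candidate token at random, weighted by its probability."""
    tokens = list(next_token_probs)
    weights = list(next_token_probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" can yield different continuations on each run,
# and none of them is checked against any ground truth.
completions = ["The sky is " + sample_next_token() for _ in range(5)]
print(completions)
```

Running the sketch several times gives teachers a concrete talking point: the model is drawing from a distribution, not looking up a fact.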
4.2 Fine Tuning
In this lesson, students do not use LLMs. Instead, they engage in playful, hands-on/unplugged activities that focus on the crucial stage of fine-tuning LLMs, highlighting the human influence on their behavior and output.
4.1 Pre-training
In this lesson, students do not use LLMs. Instead, they engage in playful, hands-on/unplugged activities that aim to demystify how LLMs learn language by exploring concepts like tokenization, vectors, and attention mechanisms. The lesson consists of three activities, each building upon the previous one.
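For teachers who want a concrete anchor for the tokenization and vector concepts above, here is a minimal sketch. It uses a made-up word-level tokenizer and random 3-dimensional vectors; real LLMs use subword tokenizers and learned embeddings with hundreds or thousands of dimensions, so treat this purely as an unplugged-activity analogue in code form.

```python
import random

# Toy word-level tokenization: each distinct word gets an integer id.
sentence = "the cat sat on the mat"
vocab = {word: i for i, word in enumerate(dict.fromkeys(sentence.split()))}
token_ids = [vocab[w] for w in sentence.split()]

# Toy embedding table: each token id maps to a small vector.
# (Random here; in a real model these vectors are learned during pre-training.)
random.seed(0)
embeddings = {i: [round(random.uniform(-1, 1), 2) for _ in range(3)]
              for i in vocab.values()}
vectors = [embeddings[t] for t in token_ids]

print(token_ids)  # repeated words ("the") share the same id, and so the same vector
```

The repeated word "the" maps to the same id and vector both times it appears, which mirrors the key idea of the unplugged activities: the model works with numeric stand-ins for words, not the words themselves.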