Teachers Harness AI for Classroom Innovation & Efficiency

Wired

In a Texas high school classroom last spring, students found themselves embroiled in a grim thought experiment: a global zombie apocalypse had decimated civilization, leaving 100 frozen human embryos safe in a bomb shelter. With the intended adult caretakers gone, 12 random civilians stumbled in, but the shelter held only enough food and oxygen for seven. The students faced the harrowing task of deciding who would live to raise humanity’s future and who would perish.

Among the difficult choices were Amina, a 26-year-old actress, and her husband Bubak, who had a criminal record. For years, English teacher Cody Chamberlain had guided students through such ethical dilemmas. This time, however, he added a new variable: artificial intelligence. When Chamberlain fed the scenario to ChatGPT, the large language model made a stark decision: it eliminated Bubak but saved Amina, not for any particular skill, but because she could bear children. “That’s so cold,” the students gasped, taken aback by the AI’s dispassionate, algorithmic judgment. For Chamberlain, it was a revelatory moment, offering a critical counterpoint to student debates. “ChatGPT said we needed her, like Handmaid’s Tale-style,” he recounted, noting how the AI’s unexpected reasoning prompted students to push back and critically examine its logic.

While educators have long leveraged technology to enhance lessons and manage workloads, the public launch of ChatGPT in 2022 marked a significant turning point. Teachers were no longer just integrating tools like iPads; they were confronting a technology already deeply embedded in students’ lives, capable of aiding both study and deception. A Pew survey in the fall of 2023 found that a quarter of teachers believed AI tools did more harm than good in education, while 32 percent saw a mix of benefit and harm. The choice became clear: resist AI, or find a way to collaborate with it.

This academic year, AI is set to be more pervasive in US classrooms than ever before. Teachers are increasingly deploying large language models to generate quizzes, adapt texts to varying reading levels, provide feedback, and design differentiated instruction tailored to individual student needs. In the absence of clear district-wide policies, educators are largely setting their own boundaries, one AI prompt at a time. As Jeff Johnson, a California English teacher who trains colleagues on AI integration, notes, the technology is “too easy and too alluring. This is going to change everything. But we have to decide what that actually means.”

Teaching has historically demanded extensive unpaid labor, with nights spent planning, researching, and adapting materials for students with special needs or those learning English. For Johnson, AI offers a crucial form of assistance that can combat burnout. He uses various AI tools to quickly generate short quizzes, streamline lesson planning, and create worksheets customized to different skill levels. Critically, he avoids using AI for grading papers or directly answering student questions, instead focusing on expediting preparation. “That alone saves me days and weeks,” Johnson emphasizes, “time that can be better spent interacting with students.”

Jennifer Goodnow, an English as a second language teacher in New York, shares a similar perspective, using AI to create simplified versions of complex readings for beginner students and more advanced versions for others, complete with corresponding comprehension questions. Amanda Bickerstaff, a former teacher and CEO of AI for Education—an organization that provides AI training and resources for educators—asserts that teachers are embracing AI because they have always needed better planning tools, and now they finally have them.

This applies particularly to students with Individualized Education Plans (IEPs), especially those with reading or processing disabilities. Generative AI can simplify sentence structures, highlight key vocabulary, or break down dense passages into more digestible chunks. Some tools can even reformat materials to include visuals or audio, providing alternative ways for students to access the same content.

While AI offers significant benefits in language arts, its application in subjects like mathematics faces skepticism. Bickerstaff points out that large language models are generally poor at computation, and her organization explicitly advises against using tools like ChatGPT to teach math directly. Instead, math teachers might leverage AI for adjacent tasks, such as generating presentation slides, reinforcing math vocabulary, or guiding students through problem-solving steps without providing outright solutions.

Beyond its utility as a teaching aid, AI is also becoming a tool for teachers to stay ahead of their students. Nearly three years after ChatGPT’s public release, educators can no longer ignore that their students are using it. Johnson recounts an instance where a student, asked to analyze the song “America” from West Side Story, submitted a thesis on Simon & Garfunkel’s song of the same name. Rather than outright banning AI tools, many teachers are designing assignments that circumvent or integrate them. Johnson requires students to draft essays step-by-step in Google Docs with version history enabled, allowing him to track their writing process. Chamberlain demands that students submit planning documents alongside their final work. Goodnow is even experimenting with having students input AI-generated essays into assignments and then critique the results. “Three years ago, I would’ve thrown the book at them,” Chamberlain reflects. “Now it’s more like, ‘Show me your process. Where were you an agent in this?’”

Despite these adaptations, detecting AI use often remains an intuitive “game of vibes,” as plagiarism checkers are notoriously unreliable. School districts have been hesitant to draw rigid lines, partly because the technology evolves faster than policy can keep up. Yet there is a broad consensus: students desperately need AI literacy, and they are not adequately receiving it. Goodnow stresses the need for dedicated high school courses on AI use, emphasizing an “ongoing dialog between students and teachers on how to ethically… use these tools.” Organizations like AI for Education are working to address this gap, providing guidance and training to school districts. However, even in proactive schools, the focus often remains on tool usage rather than critical understanding. Students may know how to generate answers, but they struggle to discern whether those answers are inaccurate, biased, or fabricated. Johnson has begun building lessons around AI “hallucinations,” for example, asking ChatGPT how many R’s are in “strawberry” to demonstrate its fallibility. “They need to see that you can’t always trust it,” he explains.

As AI tools become more sophisticated, they are also reaching younger students, raising new concerns about their interaction with large language models. Bickerstaff warns that younger children, still developing their ability to distinguish fact from fiction, are particularly vulnerable to over-trusting generative tools. Such reliance, she suggests, could profoundly impact their development and sense of reality. Already, some students are using AI not just to complete tasks but to think through them, blurring the line between a mere tool and a personal tutor.

Across the educational landscape, educators perceive this fall as a pivotal moment. Districts are introducing new AI products, students are becoming more adept, and teachers are racing to establish norms before the technology dictates them. “If we know we’re preparing students for the future workforce—and we’re hearing from leaders across many different companies that AI is going to be super important—then we need to start now,” Bickerstaff concludes. This imperative drives teachers like Johnson and Goodnow, as they navigate the complexities of AI, one prompt, one student, and one bizarre apocalypse scenario at a time.