
Source
Education Week
Summary
Sam Wineburg and Nadav Ziv argue that artificial intelligence, by design, avoids the phrase “I don’t know,” a trait that undermines the essence of learning. Drawing on OpenAI’s research, they note that chatbots are penalised for expressing uncertainty and rewarded for confident—but often incorrect—answers. This, they contend, clashes with educational goals that value questioning, evidence-weighing, and intellectual humility. The authors urge educators to slow the rush to integrate AI into classrooms and to teach critical evaluation alongside any adoption. Instead of treating AI as a source of truth, students must learn to interrogate it—asking for sources, considering evidence, and recognising ambiguity. True learning, they write, depends on curiosity and the courage to admit what one does not know.
Key Points
- Chatbots are trained to eliminate uncertainty, prioritising fluency over accuracy.
- Students and adults often equate confident answers with credible information.
- AI risks promoting surface-level understanding and discouraging critical inquiry.
- Educators should model scepticism, teaching students to source and question AI outputs.
- Learning thrives on doubt and reflection—qualities AI currently suppresses.
Keywords
URL
Summary generated by ChatGPT 5