AI Is Trained to Avoid These Three Words That Are Essential to Learning


A glowing, futuristic central processing unit (CPU) or AI core, radiating blue light and surrounded by complex circuit board patterns. Three prominent red shield icons, each with a diagonal 'no' symbol crossing through it, are positioned around the core. Inside these shields are the words "WHY," "HOW," and "IMAGINE" in bold white text, signifying that these concepts are blocked or avoided. The overall background is dark and digital, with streams of binary code and data flowing. Image (and typos) generated by Nano Banana.
Wineburg and Ziv argue that current AI training protocols discourage the three-word phrase “I don’t know,” an admission that is fundamental to human learning, critical thinking, and intellectual humility. This raises significant questions about the depth of understanding and innovation possible with AI. Image (and typos) generated by Nano Banana.

Source

Education Week

Summary

Sam Wineburg and Nadav Ziv argue that artificial intelligence, by design, avoids the phrase “I don’t know,” a trait that undermines the essence of learning. Drawing on OpenAI’s research, they note that chatbots are penalised for expressing uncertainty and rewarded for confident—but often incorrect—answers. This, they contend, clashes with educational goals that value questioning, evidence-weighing, and intellectual humility. The authors caution educators against rushing to integrate AI into classrooms without first teaching critical evaluation. Instead of treating AI as a source of truth, students must learn to interrogate it—asking for sources, considering evidence, and recognising ambiguity. True learning, they write, depends on curiosity and the courage to admit what one does not know.

Key Points

  • Chatbots are trained to eliminate uncertainty, prioritising fluency over accuracy.
  • Students and adults often equate confident answers with credible information.
  • AI risks promoting surface-level understanding and discouraging critical inquiry.
  • Educators should model scepticism, teaching students to source and question AI outputs.
  • Learning thrives on doubt and reflection—qualities AI currently suppresses.

Keywords

URL

https://www.edweek.org/technology/opinion-ai-is-trained-to-avoid-these-3-words-that-are-essential-to-learning/2025/10

Summary generated by ChatGPT 5


Study finds ChatGPT-5 is wrong about 1 in 4 times – here’s the reason why


A digital, transparent screen displays a large, bold '25%' error rate in red, with a smaller '1 in 4' below it. A glowing blue icon of a brain with a gear inside is shown on the left, while a red icon of a corrupted data symbol is on the right. A researcher in a lab coat looks at the screen with a concerned expression. The scene visually represents the unreliability of generative AI. Generated by Nano Banana.
While generative AI tools like ChatGPT-5 offer powerful capabilities, new research highlights a critical finding: they can be wrong as often as one in four times. This image captures the tension between AI’s potential and its inherent fallibility, underscoring the vital need for human oversight and fact-checking to ensure accuracy and reliability. Image (and typos) generated by Nano Banana.

Source

Tom’s Guide

Summary

A recent OpenAI study finds that ChatGPT-5 produces incorrect answers in roughly 25% of cases, due largely to training and evaluation frameworks that penalise uncertainty. Because benchmarks reward confident answers and give no credit for “I don’t know,” models are biased toward plausible-sounding guesses even when unsure. More sophisticated reasoning models tend to hallucinate more because they generate more claims, and therefore have more opportunities to err. The authors propose shifting evaluation metrics to reward calibrated uncertainty, rather than penalising honesty, to reduce harmful misinformation in critical domains.
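The incentive problem can be made concrete with a small expected-score calculation. The sketch below is illustrative only and not taken from the study: under accuracy-only grading, a wrong guess costs no more than an abstention, so guessing always has non-negative expected value; once wrong answers carry a penalty, abstaining becomes the better strategy below a confidence threshold. The penalty value of 2 points is an arbitrary choice for illustration.

    # Toy illustration (not from the OpenAI study): how scoring rules shape
    # a model's incentive to guess versus to say "I don't know".

    def expected_score(p_correct: float, wrong_penalty: float) -> float:
        """Expected score for answering when the model is correct with
        probability p_correct; a wrong answer costs wrong_penalty points."""
        return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

    def best_action(p_correct: float, wrong_penalty: float) -> str:
        """Abstaining scores 0, so answer only if guessing beats that."""
        return "answer" if expected_score(p_correct, wrong_penalty) > 0 else "abstain"

    for p in (0.9, 0.6, 0.3):
        # Accuracy-only grading (no penalty): guessing always dominates abstaining.
        accuracy_only = best_action(p, wrong_penalty=0.0)
        # Calibration-aware grading (wrong answers cost 2 points): abstain
        # unless confidence exceeds 2 / (1 + 2), roughly 67%.
        with_penalty = best_action(p, wrong_penalty=2.0)
        print(f"confidence {p:.0%}: accuracy-only -> {accuracy_only}, with penalty -> {with_penalty}")

Read this way, the proposed benchmark redesign simply changes the payoff structure so that a well-calibrated “I don’t know” is a viable scoring strategy rather than a forfeit.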

Key Points

  • ChatGPT-5 answers incorrectly in roughly one of every four responses (~25%).
  • Training and benchmarking systems currently penalise hesitation, nudging models toward confident guesses even when inaccurate.
  • Reasoning-focused models like o3 and o4-mini hallucinate more often—they make more claims, so they have more opportunities to err.
  • The study recommends redesigning AI benchmarks to reward calibrated uncertainty and the ability to defer instead of guessing.
  • For users: treat AI output as tentative and verify it, especially in high-stakes domains (medicine, law, finance).

Keywords

URL

https://www.tomsguide.com/ai/study-finds-chatgpt-5-is-wrong-about-1-in-4-times-heres-the-reason-why

Summary generated by ChatGPT 5