Students using ChatGPT beware: Real learning takes legwork, study finds


A split image illustrating two contrasting study methods. On the left, a student in a blue-lit setting uses a laptop for "SHORT-CUT LEARNING" with "EASY ANSWERS" floating around. On the right, a student in a warm, orange-lit setting is engaged in "REAL LEGWORK LEARNING," writing in a notebook with open books and calculations. A large question mark divides the two scenes. Image (and typos) generated by Nano Banana.
The learning divide: a visual contrast between the potential pitfalls of relying on AI for “easy answers” and the benefits of diligent study and engagement, as a new study suggests. Image (and typos) generated by Nano Banana.

Source

The Register

Summary

A new study published in PNAS Nexus finds that people who rely on ChatGPT or similar AI tools for research develop shallower understanding compared with those who gather information manually. Conducted by researchers from the University of Pennsylvania’s Wharton School and New Mexico State University, the study involved over 10,000 participants. Those using AI-generated summaries retained fewer facts, demonstrated less engagement, and produced advice that was shorter, less original, and less trustworthy. The findings reinforce concerns that overreliance on AI can “deskill” learners by replacing active effort with passive consumption. The researchers conclude that AI should support—not replace—critical thinking and independent study.

Key Points

  • Study of over 10,000 participants compared AI-assisted and traditional research.
  • AI users showed shallower understanding and less factual recall.
  • AI summaries led to homogenised, less trustworthy responses.
  • Overreliance on AI risks reducing active learning and cognitive engagement.
  • Researchers recommend using AI as a support tool, not a substitute.

Keywords

URL

https://www.theregister.com/2025/11/03/chatgpt_real_understanding/

Summary generated by ChatGPT 5


This Professor Let Half His Class Use AI. Here’s What Happened


A split classroom scene with a professor in the middle, presenting data. The left side, labeled "GROUP A: WITH AI," shows disengaged students with "F" grades. The right side, labeled "GROUP B: NO AI," shows engaged students with "A+" grades, depicting contrasting outcomes of AI use in a classroom experiment. Image (and typos) generated by Nano Banana.
An academic experiment unfolds: one professor compared the engagement and performance of students who used AI with those who did not, with results less stark than the image suggests. Image (and typos) generated by Nano Banana.

Source

Gizmodo

Summary

A study by University of Massachusetts Amherst professor Christian Rojas compared two sections of the same advanced economics course: one section permitted structured AI use, the other did not. The results revealed that allowing AI under clear guidelines improved student engagement, confidence, and reflective learning but did not affect exam performance. Students with AI access reported greater efficiency and satisfaction with the course design while developing stronger habits of self-correction and critical evaluation of AI outputs. Rojas concludes that carefully scaffolded AI integration can enrich learning experiences without fostering dependency or academic shortcuts, though larger studies are needed to validate the findings.

Key Points

  • Structured AI use increased engagement and confidence but not exam scores.
  • Students used AI for longer, more focused sessions and reflective learning.
  • Positive perceptions grew regarding efficiency and instructor quality.
  • AI integration encouraged editing, critical thinking, and ownership of ideas.
  • Researchers stress that broader trials are required to validate results.

Keywords

URL

https://gizmodo.com/this-professor-let-half-his-class-use-ai-heres-what-happened-2000678960

Summary generated by ChatGPT 5


Is Increasing Use of AI Damaging Students’ Learning Ability?


A split image contrasting two groups of students in a classroom. On the left, a blue-lit side represents "COGNITIVE DECAY" with students passively looking at laptops receiving "EASY ANSWERS." On the right, an orange-lit side represents "CRITICAL THINKING" and "CREATIVITY" with students actively collaborating and working. Image (and typos) generated by Nano Banana.
A critical question posed: Does the growing reliance on AI lead to cognitive decay, or can it be harnessed to foster critical thinking and creativity in students? Image (and typos) generated by Nano Banana.

Source

Radio New Zealand (RNZ) – Nine to Noon

Summary

University of Auckland professor Alex Sims examines whether the growing integration of artificial intelligence in classrooms and lecture halls enhances or impedes student learning. Drawing on findings from an MIT neuroscience study and an Oxford University report, Sims highlights both the cognitive effects of AI use and students’ own accounts of its impact on motivation and understanding. The research suggests that while AI tools can aid efficiency, overreliance may disrupt the brain processes central to deep learning and independent reasoning. The discussion raises questions about how to balance technological innovation with the preservation of critical thinking and sustained attention.

Key Points

  • AI use in education is expanding rapidly across levels and disciplines.
  • MIT research explores how AI affects neural activity linked to learning.
  • Oxford report includes students’ perceptions of AI’s influence on study habits.
  • Benefits include efficiency; risks include reduced cognitive engagement.
  • Experts urge educators to maintain a balance between AI support and active learning.

Keywords

URL

https://www.rnz.co.nz/national/programmes/ninetonoon/audio/2019010577/is-increasing-use-of-ai-damaging-students-learning-ability

Summary generated by ChatGPT 5


Guidance on Artificial Intelligence in Schools


Source

Department of Education and Youth & Oide Technology in Education, October 2025

Summary

This national guidance document provides Irish schools with a framework for the safe, ethical, and effective use of artificial intelligence (AI), particularly generative AI (GenAI), in teaching, learning, and school leadership. It aims to support informed decision-making, enhance digital competence, and align AI use with Ireland’s Digital Strategy for Schools to 2027. The guidance recognises AI’s potential to support learning design, assessment, and communication while emphasising human oversight, teacher professionalism, and data protection.

It presents a balanced view of benefits and risks—AI can personalise learning and streamline administration but also raises issues of bias, misinformation, data privacy, and environmental impact. The report introduces a 4P framework—Purpose, Planning, Policies, and Practice—to guide schools in integrating AI responsibly. Teachers are encouraged to use GenAI as a creative aid, not a substitute, and to embed AI literacy in curricula. The document stresses the need for ethical awareness, alignment with GDPR and the EU AI Act (2024), and continuous policy updates as technology evolves.

Key Points

  • AI should support, not replace, human-led teaching and learning.
  • Responsible use requires human oversight, verification, and ethical reflection.
  • AI literacy for teachers, students, and leaders is central to safe adoption.
  • Compliance with GDPR and the EU AI Act ensures privacy and transparency.
  • GenAI tools must be age-appropriate and used within consent frameworks.
  • Bias, misinformation, and “hallucinations” demand critical human review.
  • The 4P Approach (Purpose, Planning, Policies, Practice) structures school-level implementation.
  • Environmental and wellbeing impacts must be considered in AI use.
  • Collaboration between the Department, Oide, and schools underpins future updates.
  • Guidance will be continuously revised to reflect evolving practice and research.

Conclusion

The guidance frames AI as a powerful but high-responsibility tool in education. By centring ethics, human agency, and data protection, schools can harness AI’s potential while safeguarding learners’ wellbeing, trust, and equity. Its iterative, values-led approach ensures Ireland’s education system remains adaptive, inclusive, and future-ready.

Keywords

URL

https://assets.gov.ie/static/documents/dee23cad/Guidance_on_Artificial_Intelligence_in_Schools_25.pdf

Summary generated by ChatGPT 5


AI Is Trained to Avoid These Three Words That Are Essential to Learning


A glowing, futuristic central processing unit (CPU) or AI core, radiating blue light and surrounded by complex circuit board patterns. Three prominent red shield icons, each with a diagonal 'no' symbol crossing through it, are positioned around the core. Inside these shields are the words "WHY," "HOW," and "IMAGINE" in bold white text, signifying that these concepts are blocked or avoided. The overall background is dark and digital, with streams of binary code and data flowing. Image (and typos) generated by Nano Banana.
A new analysis argues that AI chatbots are trained to avoid three words essential to human learning, critical thinking, and creativity: “I don’t know.” This raises significant questions about the depth of understanding and inquiry AI encourages. Image (and typos, and its own choice of three words) generated by Nano Banana.

Source

Education Week

Summary

Sam Wineburg and Nadav Ziv argue that artificial intelligence, by design, avoids the phrase “I don’t know,” a trait that undermines the essence of learning. Drawing on OpenAI’s research, they note that chatbots are penalised for expressing uncertainty and rewarded for confident—but often incorrect—answers. This, they contend, clashes with educational goals that value questioning, evidence-weighing, and intellectual humility. The authors caution educators against rushing to integrate AI into classrooms without teaching critical evaluation. Instead of treating AI as a source of truth, students must learn to interrogate it—asking for sources, considering evidence, and recognising ambiguity. True learning, they write, depends on curiosity and the courage to admit what one does not know.

Key Points

  • Chatbots are trained to eliminate uncertainty, prioritising fluency over accuracy.
  • Students and adults often equate confident answers with credible information.
  • AI risks promoting surface-level understanding and discouraging critical inquiry.
  • Educators should model scepticism, teaching students to source and question AI outputs.
  • Learning thrives on doubt and reflection—qualities AI currently suppresses.

Keywords

URL

https://www.edweek.org/technology/opinion-ai-is-trained-to-avoid-these-3-words-that-are-essential-to-learning/2025/10

Summary generated by ChatGPT 5