Students using ChatGPT beware: Real learning takes legwork, study finds


A split image illustrating two contrasting study methods. On the left, a student in a blue-lit setting uses a laptop for "SHORT-CUT LEARNING" with "EASY ANSWERS" floating around. On the right, a student in a warm, orange-lit setting is engaged in "REAL LEGWORK LEARNING," writing in a notebook with open books and calculations. A large question mark divides the two scenes.
The learning divide: A visual comparison highlights the potential pitfalls of relying on AI for “easy answers” versus the proven benefits of diligent study and engagement, as a new study suggests. Image (and typos) generated by Nano Banana.

Source

The Register

Summary

A new study published in PNAS Nexus finds that people who rely on ChatGPT or similar AI tools for research develop shallower understanding compared with those who gather information manually. Conducted by researchers from the University of Pennsylvania’s Wharton School and New Mexico State University, the study involved over 10,000 participants. Those using AI-generated summaries retained fewer facts, demonstrated less engagement, and produced advice that was shorter, less original, and less trustworthy. The findings reinforce concerns that overreliance on AI can “deskill” learners by replacing active effort with passive consumption. The researchers conclude that AI should support—not replace—critical thinking and independent study.

Key Points

  • Study of over 10,000 participants compared AI-assisted and traditional research.
  • AI users showed shallower understanding and less factual recall.
  • AI summaries led to homogenised, less trustworthy responses.
  • Overreliance on AI risks reducing active learning and cognitive engagement.
  • Researchers recommend using AI as a support tool, not a substitute.

Keywords

URL

https://www.theregister.com/2025/11/03/chatgpt_real_understanding/

Summary generated by ChatGPT 5


Academic Libraries Embrace AI


A grand, traditional academic library transformed with futuristic technology. Holographic interfaces display data, and robotic arms extend from bookshelves. Students use laptops and VR headsets, while a central figure at a desk oversees a glowing AI monolith, symbolizing the integration of AI.
The future of learning: Academic libraries are evolving into hubs where traditional knowledge meets cutting-edge AI, enhancing research and access to information. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

A global Clarivate survey of more than 2,000 librarians across 109 countries shows that artificial intelligence adoption in libraries is accelerating, particularly within academic institutions. Sixty-seven percent of libraries are exploring or implementing AI, up from 63 percent in 2024, with academic libraries leading the trend. Their priorities include supporting student learning and improving content discovery. Libraries that provide AI training, resources, and leadership encouragement report the highest success and optimism. However, adoption and attitudes vary sharply by region—U.S. librarians remain the least optimistic—and by seniority, with senior leaders expressing greater confidence and favouring administrative applications.

Key Points

  • 67% of libraries are exploring or using AI, up from 63% in 2024.
  • Academic libraries lead in adoption, focusing on student engagement and learning.
  • AI training and institutional support drive successful implementation.
  • Regional differences persist; U.S. librarians are the least optimistic, at just 7%.
  • Senior librarians show higher confidence and prefer AI for administrative efficiency.

Keywords

URL

https://www.insidehighered.com/news/quick-takes/2025/10/31/academic-libraries-embrace-ai

Summary generated by ChatGPT 5


Where Does Human Thinking End and AI Begin? An AI Authorship Protocol Aims to Show the Difference


A split image contrasting human and AI cognitive processes. On the left, a woman writes, surrounded by concepts like "HUMAN INTUITION" and "ORIGINAL THOUGHT." On the right, a man works at a computer, with "AI GENERATION" and "COMPUTATIONAL LOGIC" displayed. A central vertical bar indicates an "AUTHORSHIP PROTOCOL: 60% HUMAN / 40% AI."
Decoding authorship: A visual representation of the intricate boundary between human creativity and AI generation, highlighting the need for protocols to delineate their contributions. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

Eli Alshanetsky, a philosophy professor at Temple University, warns that as AI-generated writing grows increasingly polished, the link between human reasoning and authorship is at risk of dissolving. To preserve academic and professional integrity, his team is piloting an “AI authorship protocol” that verifies human engagement during the creative process without resorting to surveillance or detection. The system embeds real-time reflective prompts and produces a secure “authorship tag” confirming that work aligns with specified AI-use rules. Alshanetsky argues this approach could serve as a model for ensuring accountability and trust across education, publishing, and professional fields increasingly shaped by AI.

Key Points

  • Advanced AI threatens transparency around human thought in writing and decision-making.
  • A new authorship protocol links student output to authentic reasoning.
  • The system uses adaptive AI prompts and verification tags to confirm engagement.
  • It avoids intrusive monitoring by building AI-use terms into the submission process.
  • The model could strengthen trust in professions dependent on human judgment.

Keywords

URL

https://theconversation.com/where-does-human-thinking-end-and-ai-begin-an-ai-authorship-protocol-aims-to-show-the-difference-266132

Summary generated by ChatGPT 5


This Professor Let Half His Class Use AI. Here’s What Happened


A split classroom scene with a professor in the middle, presenting data. The left side, labeled "GROUP A: WITH AI," shows disengaged students with "F" grades. The right side, labeled "GROUP B: NO AI," shows engaged students with "A+" grades, depicting contrasting outcomes of AI use in a classroom experiment.
An academic experiment unfolds: Visualizing the stark differences in engagement and performance between students who used AI and those who did not, as observed by one professor. Image (and typos) generated by Nano Banana.

Source

Gizmodo

Summary

A study by University of Massachusetts Amherst professor Christian Rojas compared two sections of the same advanced economics course—one permitted structured AI use, the other did not. The results revealed that allowing AI under clear guidelines improved student engagement, confidence, and reflective learning but did not affect exam performance. Students with AI access reported greater efficiency and satisfaction with course design while developing stronger habits of self-correction and critical evaluation of AI outputs. Rojas concludes that carefully scaffolded AI integration can enrich learning experiences without fostering dependency or academic shortcuts, though larger studies are needed.

Key Points

  • Structured AI use increased engagement and confidence but not exam scores.
  • Students used AI for longer, more focused sessions and reflective learning.
  • Positive perceptions grew regarding efficiency and instructor quality.
  • AI integration encouraged editing, critical thinking, and ownership of ideas.
  • Researchers stress that broader trials are required to validate results.

Keywords

URL

https://gizmodo.com/this-professor-let-half-his-class-use-ai-heres-what-happened-2000678960

Summary generated by ChatGPT 5


AI: Are we empowering students – or outsourcing the skills we aim to cultivate?


A stark split image contrasting two outcomes of AI in education, divided by a jagged white lightning bolt. The left side shows a diverse group of three enthusiastic students working collaboratively on laptops, with one student raising their hands in excitement. Above them, a vibrant, glowing display of keywords like "CRITICAL THINKING," "CREATIVITY," and "COLLABORATION" emanates, surrounded by data and positive learning metrics. The right side shows a lone, somewhat disengaged male student working on a laptop, with a large, menacing robotic hand hovering above him. The robot hand has glowing red lights and is connected to a screen filled with complex, auto-generated data, symbolizing the automation of tasks and potential loss of human skills.
The rise of AI in education presents a crucial dichotomy: are we using it to truly empower students and cultivate essential skills, or are we inadvertently outsourcing those very abilities to algorithms? This image visually explores the two potential paths for AI’s integration into learning, urging a thoughtful approach to its implementation. Image (and typos) generated by Nano Banana.

Source

The Irish Times

Summary

Jean Noonan reflects on the dual role of artificial intelligence in higher education—its capacity to empower learning and its risk of eroding fundamental human skills. As AI becomes embedded in teaching, research, and assessment, universities must balance innovation with integrity. AI literacy, she argues, extends beyond technical skills to include ethics, empathy, and critical reasoning. While AI enhances accessibility and personalised learning, over-reliance may weaken originality and authorship. Noonan calls for assessment redesigns that integrate AI responsibly, enabling students to learn with AI rather than be replaced by it. Collaboration between academia, industry, and policymakers is essential to ensure education cultivates judgment, creativity, and moral awareness. Echoing Orwell’s warning in 1984, she concludes that AI should enhance, not diminish, the intellectual and linguistic richness that defines human learning.

Key Points

  • AI literacy must combine technical understanding with ethics, empathy, and reflection.
  • Universities are rapidly adopting AI but risk outsourcing creativity and independent thought.
  • Over-reliance on AI tools can blur authorship and weaken critical engagement.
  • Assessment design should promote ethical AI use and active, independent learning.
  • Collaboration between universities and industry can align innovation with responsible practice.
  • Education must ensure AI empowers rather than replaces essential human skills.

Keywords

URL

https://www.irishtimes.com/ireland/education/2025/10/29/ai-are-we-empowering-students-or-outsourcing-the-skills-we-aim-to-cultivate/

Summary generated by ChatGPT 5