Questioning the digital degree: AI-generated work is forcing educators to reassess the integrity and perceived value of completion certificates for online courses. Image (and typos) generated by Nano Banana.
Source
The Conversation
Summary
Mohammed Estaiteyeh argues that generative AI has exposed fundamental weaknesses in asynchronous online learning, where instructors cannot observe students’ thinking or verify authorship. Traditional assessments—discussion boards, reflective posts, essays, and multimedia assignments—are now easily replaced or augmented by AI tools capable of producing personalised, citation-matched work indistinguishable from human output. Detection tools and remote proctoring offer little protection and raise serious equity and ethical issues. Estaiteyeh warns that without systemic redesign, institutions risk issuing credentials that no longer guarantee genuine learning. He advocates integrating oral exams, experiential learning with external verification, and programme-level redesign to maintain authenticity and uphold academic integrity in the AI era.
Key Points
Asynchronous online courses face the highest risk of undetectable AI substitution.
Discussion boards, reflections, essays, and even citations can be convincingly AI-generated.
AI detectors and remote proctoring are unreliable, inequitable, and ethically problematic.
Oral exams and experiential assessments offer partial safeguards but require major redesign.
Institutions must invest in structural change or risk turning asynchronous programmes into “credential mills.”
The learning divide: A visual comparison highlights the potential pitfalls of relying on AI for “easy answers” versus the benefits of diligent study and engagement, as a new study suggests. Image (and typos) generated by Nano Banana.
Source
The Register
Summary
A new study published in PNAS Nexus finds that people who rely on ChatGPT or similar AI tools for research develop shallower understanding compared with those who gather information manually. Conducted by researchers from the University of Pennsylvania’s Wharton School and New Mexico State University, the study involved over 10,000 participants. Those using AI-generated summaries retained fewer facts, demonstrated less engagement, and produced advice that was shorter, less original, and less trustworthy. The findings reinforce concerns that overreliance on AI can “deskill” learners by replacing active effort with passive consumption. The researchers conclude that AI should support—not replace—critical thinking and independent study.
Key Points
Study of 10,000 participants compared AI-assisted and traditional research.
AI users showed shallower understanding and less factual recall.
AI summaries led to homogenised, less trustworthy responses.
Overreliance on AI risks reducing active learning and cognitive engagement.
Researchers recommend using AI as a support tool, not a substitute.
by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.
There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is outright wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the reasoning behind the answer cannot be satisfactorily explained. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.
The Illusion of the All-Knowing Machine
Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed exclusively into ChatGPT and complete faith in the responses it generates. The dual deficit is clear: a lack of foundational knowledge of the topic, and a complete absence of critical engagement with the AI’s output.
Remedying this begins not with a single ‘aha!’ moment, but with a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and engaging critically with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their job is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified against a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?
Pulling Back the Curtain on AI
As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.
This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused the technology. Their use of it began to shift: no longer a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information can be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.
From Blind Trust to Critical Confidence
What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.
Tadhg Blommerde
Assistant Professor Northumbria University
Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.
His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.
The future of learning: Academic libraries are evolving into hubs where traditional knowledge meets cutting-edge AI, enhancing research and access to information. Image (and typos) generated by Nano Banana.
Source
Inside Higher Ed
Summary
A global Clarivate survey of more than 2,000 librarians across 109 countries shows that artificial intelligence adoption in libraries is accelerating, particularly within academic institutions. Sixty-seven percent of libraries are exploring or implementing AI, up from 63 percent in 2024, with academic libraries leading the trend. Their priorities include supporting student learning and improving content discovery. Libraries that provide AI training, resources, and leadership encouragement report the highest success and optimism. However, adoption and attitudes vary sharply by region—U.S. librarians remain the least optimistic—and by seniority, with senior leaders expressing greater confidence and favouring administrative applications.
Key Points
67% of libraries are exploring or using AI, up from 63% in 2024.
Academic libraries lead in adoption, focusing on student engagement and learning.
AI training and institutional support drive successful implementation.
Regional differences persist; U.S. librarians are the least optimistic, with only 7% expressing optimism.
Senior librarians show higher confidence and prefer AI for administrative efficiency.
An academic experiment unfolds: Visualizing the stark differences in engagement and performance between students who used AI and those who did not, as observed by one professor. Image (and typos) generated by Nano Banana.
Source
Gizmodo
Summary
A study by University of Massachusetts Amherst professor Christian Rojas compared two sections of the same advanced economics course—one permitted structured AI use, the other did not. The results revealed that allowing AI under clear guidelines improved student engagement, confidence, and reflective learning but did not affect exam performance. Students with AI access reported greater efficiency and satisfaction with course design while developing stronger habits of self-correction and critical evaluation of AI outputs. Rojas concludes that carefully scaffolded AI integration can enrich learning experiences without fostering dependency or academic shortcuts, though larger studies are needed.
Key Points
Structured AI use increased engagement and confidence but not exam scores.
Students used AI for longer, more focused sessions and reflective learning.
Positive perceptions grew regarding efficiency and instructor quality.
AI integration encouraged editing, critical thinking, and ownership of ideas.
Researchers stress that broader trials are required to validate results.