Triple threat to student success: Leaders in higher education are currently grappling with the complex and intertwined challenges of making college affordable, integrating AI responsibly, and ensuring robust diversity and inclusion across their institutions. Image (and typos) generated by Nano Banana.
Source
Inside Higher Ed
Summary
This article examines the concerns expressed by student-success leaders across U.S. higher education institutions, reflecting a convergence of affordability challenges, diversity commitments and the accelerating influence of generative AI. While administrators generally maintain confidence in institutional missions, they report increasing difficulty in evaluating authentic student engagement and learning outcomes due to widespread AI use. AI-assisted work can obscure students’ actual competencies, making early intervention and personalised support more complex. Leaders warn that inequitable access to advanced AI tools and differences in digital literacy may widen existing gaps for underrepresented groups. These concerns extend beyond teaching and assessment policies to broader institutional planning, prompting calls for staff training, student guidance frameworks and integrated AI governance strategies. The article suggests that institutions must adopt more holistic responses that acknowledge AI’s influence on retention, equity, affordability and long-term student success. AI is no longer a marginal pedagogical issue but an influential variable in strategic decision-making.
Key Points
AI seen as major pressure alongside affordability and DEI.
AI affects measurement of engagement and outcomes.
Philosophy meets the future: Examining the enduring relevance of Jean Baudrillard’s concepts of the hyperreal and simulacra, and how they eerily foreshadow the rise and impact of modern generative AI. Image (and typos) generated by Nano Banana.
Source
The Conversation
Summary
Bran Nicol argues that Jean Baudrillard’s cultural theory anticipated the logic and impact of today’s AI decades before its emergence. Through concepts such as simulacra, hyperreality and the disappearance of the real, Baudrillard foresaw a world in which screens, networks and digital proxies would replace direct human experience. He framed AI as a cognitive prosthetic: a device that simulates thought while encouraging humans to outsource thinking itself. Nicol highlights Baudrillard’s belief that such reliance risks eroding human autonomy and “exorcising” our humanness, not through machine domination but through our willingness to surrender judgement. Contemporary developments—AI actors, algorithmic companions and blurred boundaries between human and machine—demonstrate the uncanny accuracy of his predictions.
Key Points
Baudrillard predicted smartphone culture, hyperreality and AI-mediated life decades early.
He viewed AI as a prosthetic that produces the appearance of thought, not thought itself.
Outsourcing cognition risks diminishing human autonomy and “disappearing” the real.
Modern AI phenomena—deepfakes, AI influencers, chatbots—align with his theories.
He believed only human pleasure and embodied experience distinguished us from machines.
Questioning the digital degree: AI-generated work is forcing educators to reassess the integrity and perceived value of completion certificates for online courses. Image (and typos) generated by Nano Banana.
Source
The Conversation
Summary
Mohammed Estaiteyeh argues that generative AI has exposed fundamental weaknesses in asynchronous online learning, where instructors cannot observe students’ thinking or verify authorship. Traditional assessments—discussion boards, reflective posts, essays, and multimedia assignments—are now easily replaced or augmented by AI tools capable of producing personalised, citation-matched work indistinguishable from human output. Detection tools and remote proctoring offer little protection and raise serious equity and ethical issues. Estaiteyeh warns that without systemic redesign, institutions risk issuing credentials that no longer guarantee genuine learning. He advocates integrating oral exams, experiential learning with external verification, and programme-level redesign to maintain authenticity and uphold academic integrity in the AI era.
Key Points
Asynchronous online courses face the highest risk of undetectable AI substitution.
Discussion boards, reflections, essays, and even citations can be convincingly AI-generated.
AI detectors and remote proctoring are unreliable, inequitable, and ethically problematic.
Oral exams and experiential assessments offer partial safeguards but require major redesign.
Institutions must invest in structural change or risk turning asynchronous programmes into “credential mills.”
The learning divide: A visual comparison highlights the potential pitfalls of relying on AI for “easy answers” versus the proven benefits of diligent study and engagement, as a new study suggests. Image (and typos) generated by Nano Banana.
Source
The Register
Summary
A new study published in PNAS Nexus finds that people who rely on ChatGPT or similar AI tools for research develop shallower understanding compared with those who gather information manually. Conducted by researchers from the University of Pennsylvania’s Wharton School and New Mexico State University, the study involved over 10,000 participants. Those using AI-generated summaries retained fewer facts, demonstrated less engagement, and produced advice that was shorter, less original, and less trustworthy. The findings reinforce concerns that overreliance on AI can “deskill” learners by replacing active effort with passive consumption. The researchers conclude that AI should support—not replace—critical thinking and independent study.
Key Points
Study of over 10,000 participants compared AI-assisted and traditional research.
AI users showed shallower understanding and less factual recall.
AI summaries led to homogenised, less trustworthy responses.
Overreliance on AI risks reducing active learning and cognitive engagement.
Researchers recommend using AI as a support tool, not a substitute.
by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.
There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is flatly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the reasoning behind the answer cannot be satisfactorily explained. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.
The Illusion of the All-Knowing Machine
Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching has become to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed into (exclusively) ChatGPT, and complete faith in the responses it generates. The dual deficit is clear: a lack of foundational knowledge on the topic, and a complete absence of critical engagement with the AI's output.
Remedying this comes not from a single 'aha!' moment, but through a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their task is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?
Pulling Back the Curtain on AI
As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.
This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.
From Blind Trust to Critical Confidence
What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.
Tadhg Blommerde
Assistant Professor Northumbria University
Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.
His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.