The revolt against automation: Capturing the frustration of students pushing back against educational institutions that rely on AI to replace human instructors. Image (and typos) generated by Nano Banana.
Source
The Guardian
Summary
Students on a coding apprenticeship at the University of Staffordshire say they were “robbed of knowledge” after discovering that large portions of their course materials—including slides, assignments and even voiceovers—were generated by AI. Despite university policies restricting students’ use of AI, staff appeared to rely heavily on AI-generated teaching content, leading to accusations of hypocrisy and declining trust in the programme. Students reported inconsistent editing, generic content and bizarre glitches such as a mid-video switch to a Spanish accent. Complaints brought little change, and although human lecturers delivered the final session, students argue the damage to their learning and career prospects has already been done. The case highlights rising tensions as universities increasingly adopt AI tools without transparent standards or safeguards.
Key Points
Staffordshire students discovered widespread use of AI-generated slides, tasks and videos.
AI usage contradicted strict policies prohibiting students from submitting AI-generated work.
Students reported generic content, inconsistent editing and AI voiceover glitches.
Repeated complaints yielded limited response; a human lecturer was added only at the end.
Students fear lost learning, reduced programme credibility and wasted time.
Global collaboration in the age of AI: UWA and Oxford University join forces to pioneer the integration and study of generative artificial intelligence within the landscape of higher education. Image (and typos) generated by Nano Banana.
Source
University of Western Australia
Summary
The University of Western Australia and the University of Oxford announced a formal partnership that positions generative AI as a strategic driver in the future of higher education. The collaboration focuses on advancing responsible AI research, developing governance models and integrating generative AI into teaching and learning in ways that uphold academic integrity and inclusivity. Both institutions highlight that the rapid acceleration of AI requires coordinated international responses that balance innovation with ethical safeguards. The partnership will explore curriculum transformation, staff development and AI-informed pedagogical frameworks intended to support both student learning and broader institutional capability building. By aligning two globally significant universities, the initiative signals a trend toward cross-border cooperation designed to shape sector-wide AI standards. It also indicates growing recognition that AI adoption in higher education must be underpinned by shared values, transparent methodologies and research-based evidence. This collaboration aims to become a blueprint for how universities can jointly shape the future of AI-enabled education while ensuring that human expertise remains central.
Key Points
Major partnership between UWA and Oxford to advance responsible AI
Focus on governance, research and curriculum innovation
Reflects global shift toward collaboration on AI strategy
Emphasises ethical frameworks for AI adoption in higher education
Positions AI as core to long-term institutional development
By Kerith George-Briant and Jack Hogan, Abertay University, Dundee
Estimated reading time: 5 minutes
Navigating the future: The “Two-Lane Approach” to Generative AI in assessment—balancing secure testing of threshold concepts (Lane 1) with open collaboration for developing AI literacy and critical thinking (Lane 2). Image (and typos) generated by Nano Banana.
O’Mahony’s provocatively titled “Something Wicked This Way Comes” blog outlined feelings we recognised from across the sector: that Generative AI (GenAI) tools have created unease, disruption, and uncertainty. At the same time, we felt that GenAI offered huge opportunities, and since higher education has led and celebrated innovation across disciplines for centuries, we were intrigued by how this would translate into our assessment practices.
At Abertay University, we’ve been exploring this “wicked problem” of whether and how to change teaching practices through a small-scale research project entitled “Lane Change Ahead: Artificial Intelligence’s Impact on Assessment Practices.” Our findings echo O’Mahony’s observations: while GenAI does pose a challenge to academic integrity and traditional assessment models, it also offers opportunities for innovation, equity, and deeper learning. We must respond thoughtfully, however, and acknowledge that there are a variety of views on GenAI.
Academic Sensemaking
To understand how the colleagues we interviewed felt about GenAI, we applied Degn’s (2016) concept of academic sensemaking to their perspectives and experiences. Findings showed that some assessment designers are decoupling, designing assessments that use GenAI outputs without requiring students to engage with the tools. Others are defiant or defeatist, allowing limited collaboration with GenAI tools but awarding a low percentage of the grade to that output. And some are strategic and optimistic, embracing GenAI as a tool for learning, creativity, and employability.
The responses show the reasons for unease are not just pedagogical; they’re deeply personal. GenAI challenges academic identity. Recognising this emotional response is essential to supporting staff if change is needed.
Detection and the Blurred Line
And change is needed, we would argue. Back in 2023, Perkins et al.’s analysis of Turnitin’s AI detection capabilities revealed that while 91% of fully AI-generated submissions were flagged, the average detection within each paper was only 54.8%, and only half of those flagged papers would have been referred for academic misconduct. Similar studies since then have continued to report comparable results. And if reliable detection isn’t possible, setting an “absurd line”, as Corbin et al. describe it, becomes ever more incongruous: there is no reliable way to tell whether a student has stopped at using AI for brainstorming or has engaged critically with AI-paraphrased output. Some may read this and think it’s game over; however, if we embrace these challenges and adapt our approaches, we can find solutions that are fit for purpose.
From Fear to Framework: The Two-Lane Approach
So, what is the solution? Our research explored whether the two-lane approach developed by Liu and Bridgeman would work at Abertay, where:
Lane 1: Secure Assessments would be conducted under controlled conditions to assure learning of threshold concepts and
Lane 2: Open Assessments would allow unrestricted use of GenAI.
Our case studies revealed three distinct modes of GenAI integration:
AI Output Only – Students critiqued AI-generated content without using GenAI themselves. This aligned with Lane 1 and a secure assessment method focusing on threshold concepts.
Limited Collaboration – Students used GenAI for planning and for a small piece of output within a larger assessment that otherwise prohibited GenAI use. Students developed some critical thinking, but weren’t able to apply this learning to the assessment as a whole.
Unlimited Collaboration – Students engaged fully with GenAI, with reflection and justification built into the assessment. Assessment designers reported that students produced higher-quality work and demonstrated enhanced critical thinking.
Each mode reflected a different balance of trust, control, and pedagogical intent. Interestingly, the AI Output Only pieces were secure and used to build AI literacy while meeting professional, statutory and regulatory body (PSRB) requirements, which asked for certain competencies and skills to be tested. The limited collaboration designs had an element of open assessment, but the percentage of the grade awarded to the GenAI output was minimal, and an absurd line was created by asking for no AI use in the larger part of the assessment. Finally, the assessments with unlimited collaboration were designed by colleagues who believed that writing without GenAI was not authentic and that employers would expect AI literacy skills, a belief that is perhaps not misplaced given the figure cited in O’Mahony’s blog.
Reframing the Narrative: GenAI as Opportunity
We see the need to treat GenAI as a partner in education, one that encourages critical reflection. This will require carefully scaffolded teaching activities to develop the AI literacy of students and avoid cognitive offloading. Thankfully, ways forward have begun to appear, as noted in the work of Gerlick and Jose et al.
Conclusion: From Wicked to ‘Witch’ Lane?
As educators, we have a choice. We can resist and decouple from GenAI, or we can choose to lead the narrative strategically and optimistically. Although the pathway forward may not be a yellow brick road, we believe it’s worth considering which lane may suit us best. The key is that we don’t do this in isolation, but take a pragmatic approach across our entire degree programme, considering the level of study and the appropriate AI literacy skills at each stage.
GenAI acknowledgement: Microsoft Copilot (https://copilot.microsoft.com) – used to create a draft blog from our research paper.
Kerith George-Briant
Learner Development Manager, Abertay University
Kerith George-Briant manages the Learner Development Service at Abertay. Her key interests are in building best practices in using AI, inclusivity, and accessibility.
Jack Hogan
Lecturer in Academic Practice, Abertay University
Jack Hogan works within the Abertay Learning Enhancement (AbLE) Academy as a Lecturer in Academic Practice. His research interests include student transitions and the first-year experience, microcredentials, skills development and employability.
The educational overhaul: Universities are in a frantic race to adapt their curricula, ensuring their students are equipped for a job market and world fundamentally reshaped by artificial intelligence. Image (and typos) generated by Nano Banana.
Source
ScienceBlog – NeuroEdge
Summary
A new study in Frontiers of Digital Education argues that higher education must fundamentally redesign curricula to keep pace with rapid AI advancement. Led by researchers at Lanzhou Petrochemical University of Vocational Technology, the paper warns that traditional curriculum cycles are too slow for a world where generative AI is already standard in workplaces. It proposes a comprehensive framework built on AI literacy, ethical use, interdisciplinary integration and continuous updating. The authors emphasise a tiered model of AI learning—from core literacy for all students to advanced training for specialists—and call for modular course design, industry partnerships and cultural change within universities. Without sweeping reform, they argue, institutions risk preparing students for a world that no longer exists.
Key Points
AI is reshaping what and how universities must teach, creating urgency for reform.
Study identifies AI literacy as essential for every student, regardless of discipline.
Recommends a tiered AI curriculum: foundational, applied and specialist levels.
Calls for modular, continuously updated courses aligned with fast-moving AI developments.
Argues for cultural change: interdisciplinary collaboration, new assessment models and faculty training.
The automation paradox: Experts warn that while AI drives efficiency, its widespread adoption in education may inadvertently erode the crucial cognitive and creative skills US students need to thrive in a future dominated by technology. Image (and typos) generated by Nano Banana.
Source
Times of India (Education International Desk)
Summary
This article explores concerns that widespread adoption of AI tools in education may undermine essential skills that students require for long-term success in an increasingly automated world. Educators and analysts interviewed argue that easy access to generative AI for writing, problem solving and research may weaken students’ capacity for critical thinking, creativity and independent judgement. They note that while AI can accelerate tasks, it may also reduce opportunities for deep learning and cognitive struggle, both of which are crucial for intellectual development. The article raises concerns that students who rely heavily on AI may experience diminished confidence in producing original work and solving complex problems without technological support. Experts recommend curriculum renewal that blends responsible AI literacy with explicit instruction in foundational skills, ensuring that students can use AI effectively without sacrificing their broader intellectual growth. The discussion reflects a recurring theme in the global AI-in-education debate: the need to preserve human expertise and cognitive resilience in an era of pervasive automation. The article calls for educators, policymakers and institutions to strike a balance between embracing AI and safeguarding human capabilities.
Key Points
Widespread AI use may weaken foundational cognitive skills
Risks include reduced independent thinking and reduced confidence
Educators call for curriculum redesign with balanced AI integration