Report Reveals Potential of AI to Help UK Higher Education Sector Assess Its Research More Efficiently and Fairly


Streamlining academia: A new report illuminates how artificial intelligence can be leveraged to introduce greater efficiency and fairness into the complex process of assessing research within the UK’s higher education sector. Image (and typos) generated by Nano Banana.

Source

University of Bristol

Summary

This report highlights how UK universities are beginning to integrate generative AI into research assessment processes, marking a significant shift in institutional workflows. Early pilot programmes suggest that AI can assist in evaluating research outputs, managing reviewer assignments and streamlining administrative tasks associated with national research exercises. The potential benefits include increased consistency across assessments, reduced administrative burden and enhanced scalability for institutions with extensive research portfolios. Despite these advantages, the report underscores the importance of strong governance structures, transparent methodological frameworks and ongoing human oversight to ensure fairness, academic integrity and alignment with sector norms. The emerging consensus is that AI should serve as an augmenting tool rather than a replacement for expert judgement. Institutions are encouraged to take a measured approach that balances innovation with ethical responsibility while exploring long-term strategies for responsible adoption and sector-wide coordination. This marks a shift from viewing AI as a hypothetical tool for research assessment to recognising it as an active component of evolving academic practice.

Key Points

  • GenAI already used in UK HE for research assessment.
  • Potential efficiency gains in processing large volumes of research.
  • Increased standardisation of evaluation.
  • Governance and oversight essential.
  • Recommends controlled scaling across sector.


URL

https://www.bristol.ac.uk/news/2025/november/report-reveals-potential-of-ai-to-help-assess-research-more-efficiently-.html

Summary generated by ChatGPT 5.1


Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy

By James Hanley, Oliver Harris, Caitlin Walsh, Sam Blanch, Dakota Venn-Keane, Eve Whelan, Luke Kiely, Jake Power, and Alex Rockett Power in collaboration with ChatGPT and Dr Hazel Farrell
Estimated reading time: 7 minutes
The future is now! BA (Hons) Music students from SETU in a vibrant “futureville” setting, blending the timeless artistry of music with cutting-edge technological imagination.

In recognition of how deeply AI is becoming embedded in the educational landscape, a co-created assignment exploring possibilities for music educators was considered timely. As part of the Year 3 Music Pedagogy module at South East Technological University (SETU), students were tasked with designing a learning activity that meaningfully integrated AI into the process. They were asked not only to create a resource but to trial it, evaluate it, and critically reflect on how AI shaped the learning experience. A wide range of free AI tools was used, including ChatGPT, SUNO, Audacity, Napkin, Google Gemini, NotebookLM, and ElevenLabs, and each student focused on a teaching resource that resonated with them, such as interactive tools, infographics, lesson plans, and basic websites.

Across their written and audio reflections, a rich picture emerged: AI is powerful, fallible, inspiring, frustrating, and always dependent on thoughtful human oversight. This blog is based on their reflections, which reveal a generation of educators learning not just how to use AI, but why it must be used with care.

Expanding Pedagogical Possibilities

Students consistently highlighted AI’s ability to accelerate creativity and resource development. Several noted that AI made it easier to create visually engaging materials, such as diagrams, colourful flashcards, or child‑friendly graphics. One student reflected, “With just a click of the mouse, anyone can generate their own diagrams and flash cards for learning,” emphasising how AI allowed them to design tools they would otherwise struggle to produce manually.

Others explored AI‑generated musical content. One student used a sight‑reading generator to trial melodic exercises, observing that while the exercises themselves were well‑structured, “the feedback was exceedingly generous.” Another used ChatGPT to build a lesson structure, describing the process as “seamless and streamlined,” though still requiring adjustments to ensure accuracy and alignment with Irish terminology. One reflection explained, “AI can create an instrumental track in a completely different style, but it still needs human balance through EQ, compression, and reverb to make it sound natural.” This demonstrated how AI and hands-on editing can work together to develop both musical and technical skills.
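
That reflection describes a concrete workflow: generate a track with AI, then balance it by hand. Purely as an illustrative sketch, assuming a hypothetical exported file ai_track.wav and using the free pydub library (the reflections do not specify the students’ actual tools or settings), the “human balance” step might look like this:

```python
# Illustrative "human balance" pass over an AI-generated track (assumptions noted).
# Requires pydub (pip install pydub) and ffmpeg for non-WAV formats.
from pydub import AudioSegment
from pydub.effects import compress_dynamic_range

track = AudioSegment.from_file("ai_track.wav")  # hypothetical AI-generated export

# Simple EQ: remove low-end rumble below ~80 Hz and harshness above ~12 kHz.
track = track.high_pass_filter(80).low_pass_filter(12000)

# Gentle compression to even out dynamics; threshold and ratio are example values.
track = compress_dynamic_range(track, threshold=-20.0, ratio=4.0,
                               attack=5.0, release=50.0)

# pydub has no built-in reverb; ambience would be added in an editor such as
# Audacity, which the students also used.
track.export("ai_track_balanced.wav", format="wav")
```

Either way, the student’s point stands: the generator supplies the raw material, while the judgement about EQ bands, compression ratios, and ambience remains human.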

Another student designed an interactive rhythm game for children, using ChatGPT to progressively refine the layout, colour schemes, difficulty levels, and supportive messages such as “Nice timing!” and “Perfect rhythm!” They described an iterative process spanning more than 30 versions as the model continuously adapted to new instructions. The result was a working single-player prototype that demonstrated both the creative potential and the technical limits of AI-assisted design.

The Teacher’s Role Remains Central

Across all reflections, students expressed strong awareness that AI cannot replace fundamental aspects of music teaching. Human judgement, accuracy, musical nuance, and relational connection were seen as irreplaceable. One student wrote that although AI can generate ideas and frameworks, “the underlying educational thinking remained a human responsibility.” Another reflected on voice-training tools, noting that constant pitch guidance from AI could become “a crutch,” misleading students into believing they were singing correctly even when not. Many recognised that while AI can speed up creative processes, the emotional control, balance, and overall musical feel must still come from human input. One reflection put it simply: “AI gives you the idea, but people give it life.”

There was also a deep recognition of the social dimension of teaching. As one student put it, the “teacher–student relationship bears too much of an impact” to be substituted by automated tools. Many emphasised that confidence‑building, emotional support, and adaptive feedback come from real educators, not algorithms.

Challenges, Risks, and Ethical Considerations

The assignment surfaced several important realisations, including the fact that technical inaccuracies were common. Students identified incorrect musical examples, inconsistent notation, malfunctioning website features, and audio‑mixing problems. One student documented how, over time, the “quality of the site got worse,” illustrating AI’s tendency to forget earlier instructions in long interactions. This reinforced the need for rigorous verification when creating learning materials.

Another reflection noted that not all AI websites perform equally; some produce excellent results, while others generate distorted or incomplete outputs, forcing teachers to try multiple tools before finding one that works. It also reminded educators that even free, basic programs such as Audacity can still teach valuable mixing and editing skills without the need for expensive software. A parallel concern was over-reliance. Students worried that teachers might outsource too much planning to AI or that learners might depend on automated feedback rather than developing critical listening skills. As one reflection warned, “AI can and will become a key tool… the crucial factor is that we as real people know where the line is between a ‘tool’ and a ‘free worker.’”

Equity of access also arose as a barrier. Subscription‑based AI tools required credits or payment, creating challenges for students and highlighting ethical tensions between commercial technologies and educational use. Students demonstrated strong awareness of academic integrity. They distinguished between using AI to support structure and clarity versus allowing AI to generate entire lessons or presentations. One student cautioned that presenting AI‑produced content as one’s own is “blatant plagiarism,” highlighting the need for transparent and ethical practice.

Learning About Pedagogy and Professional Identity

Many students described developing a clearer sense of themselves as educators. They reflected on the complexity of communicating clearly, engaging learners, and designing accessible content. Some discovered gaps in their teaching confidence; others found new enthusiasm for pedagogical design. One wrote, “Teaching and clearly communicating my views was more challenging than I assumed,” acknowledging the shift from student to teacher mindset. Another recognised that while AI could support efficiency, it made them more aware of their responsibility for accuracy and learner experience.

Imagining the Future of AI in Music Education

Students were divided between optimism and caution. Some saw AI becoming a standard part of educational resource creation, enabling personalised practice, interactive learning, and rapid content generation. Others expressed concern about the possibility of AI replacing human instruction if not critically managed. However, all students agreed on one point: AI works best when treated as a supportive tool rather than an autonomous teacher. As one reflection summarised, “It is clear to me that AI is by no means a replacement for musical knowledge or teaching expertise.” Another added, “AI can make the process faster and more creative, but it still needs the human touch to sound right.”

Dr Hazel Farrell

Academic Lead for GenAI, Programme Leader BA (Hons) Music
South East Technological University

Dr Hazel Farrell is the SETU Academic Lead for Generative AI, and lead for the N-TUTORR National Gen AI Network project GenAI:N3, which aims to draw on expertise across the higher education sector to create a network and develop resources to support staff and students. She has presented her research on integrating AI into the classroom in a multitude of national and international forums focusing on topics such as Gen AI and student engagement, music education, assessment re-design, and UDL.



‘We Could Have Asked ChatGPT’: Students Fight Back Over Course Taught by AI


The revolt against automation: Capturing the frustration of students pushing back against educational institutions that rely on AI to replace human instructors. Image (and typos) generated by Nano Banana.

Source

The Guardian

Summary

Students on a coding apprenticeship at the University of Staffordshire say they were “robbed of knowledge” after discovering that large portions of their course materials—including slides, assignments and even voiceovers—were generated by AI. Despite university policies restricting students’ use of AI, staff appeared to rely heavily on AI-generated teaching content, leading to accusations of hypocrisy and declining trust in the programme. Students reported inconsistent editing, generic content and bizarre glitches such as a mid-video switch to a Spanish accent. Complaints brought little change, and although human lecturers delivered the final session, students argue the damage to their learning and career prospects has already been done. The case highlights rising tensions as universities increasingly adopt AI tools without transparent standards or safeguards.

Key Points

  • Staffordshire students discovered widespread use of AI-generated slides, tasks and videos.
  • AI usage contradicted strict policies prohibiting students from submitting AI-generated work.
  • Students reported generic content, inconsistent editing and AI voiceover glitches.
  • Repeated complaints yielded limited response; a human lecturer was added only at the end.
  • Students fear lost learning, reduced programme credibility and wasted time.


URL

https://www.theguardian.com/education/2025/nov/20/university-of-staffordshire-course-taught-in-large-part-by-ai-artificial-intelligence

Summary generated by ChatGPT 5


This is not the end but a beginning: Responding to “Something Wicked This Way Comes”

By Kerith George-Briant and Jack Hogan, Abertay University Dundee
Estimated reading time: 5 minutes
Navigating the future: The “Two-Lane Approach” to Generative AI in assessment—balancing secure testing of threshold concepts (Lane 1) with open collaboration for developing AI literacy and critical thinking (Lane 2). Image (and typos) generated by Nano Banana.

O’Mahony’s provocatively titled blog “Something Wicked This Way Comes” outlined feelings we recognised from across the sector: Generative AI (GenAI) tools have created unease, disruption, and uncertainty. At the same time, we felt that GenAI offered huge opportunities, and since higher education has led and celebrated innovation across disciplines for centuries, we were intrigued by how this would translate into our assessment practices.

At Abertay University, we’ve been exploring the “wicked problem” of whether to change teaching practices through a small-scale research project entitled “Lane Change Ahead: Artificial Intelligence’s Impact on Assessment Practices.” Our findings echo O’Mahony’s observations: while GenAI does pose a challenge to academic integrity and traditional assessment models, it also offers opportunities for innovation, equity, and deeper learning. We must, however, respond thoughtfully and acknowledge that views on GenAI vary widely.

Academic Sensemaking

To understand how the colleagues we interviewed felt about GenAI, we applied Degn’s (2016) concept of academic sensemaking. Findings showed that some assessment designers are decoupling: designing assessments that use GenAI outputs without requiring students to engage with the tools. Others are defiant or defeatist, allowing limited collaboration with GenAI tools but awarding a low percentage of the grade to that output. And some are strategic and optimistic, embracing GenAI as a tool for learning, creativity, and employability.

The responses show the reasons for unease are not just pedagogical; they’re deeply personal. GenAI challenges academic identity. Recognising this emotional response is essential to supporting staff if change is needed.

Detection and the Blurred Line

And change is needed, we would argue. Back in 2023, Perkins et al.’s analysis of Turnitin’s AI detection capabilities revealed that while 91% of fully AI-generated submissions were flagged, the average detection within each paper was only 54.8%, and only half of those flagged papers would have been referred for academic misconduct; in other words, of 100 fully AI-generated submissions, only around 45 would ever have reached a misconduct referral. Similar studies since then have continued to show the same pattern of results. And if detection isn’t possible, setting an “absurd line”, as Corbin et al put it, is ever more incongruous: there is no reliable way to tell whether a student has stopped at using AI for brainstorming or has engaged critically with AI-paraphrased output. Some may read this and think it’s game over; however, if we embrace these challenges and adapt our approaches, we can find solutions that are fit for purpose.

From Fear to Framework: The Two-Lane Approach

So, what is the solution? Our research explored whether the two-lane approach developed by Liu and Bridgeman would work at Abertay, where:

  • Lane 1: Secure Assessments would be conducted under controlled conditions to assure learning of threshold concepts and
  • Lane 2: Open Assessments would allow unrestricted use of GenAI.

Our case studies revealed three distinct modes of GenAI integration:

  • AI Output Only – Students critiqued AI-generated content without using GenAI themselves. This aligned with Lane 1 and a secure assessment method focusing on threshold concepts.
  • Limited Collaboration – Students used GenAI for planning and for a minimal piece of output within a larger assessment that otherwise did not allow GenAI use. Students developed some critical thinking, but weren’t able to apply this learning to the whole assessment.
  • Unlimited Collaboration – Students engaged fully with GenAI, with reflection and justification built into the assessment. Assessment designers reported that students produced higher-quality work and demonstrated enhanced critical thinking.

Each mode reflected a different balance of trust, control, and pedagogical intent. Interestingly, the AI Output pieces were secure and were used to build AI literacy while meeting PSRB (professional, statutory and regulatory body) requirements, which asked for certain competencies and skills to be tested. The Limited Collaboration designs had an element of open assessment, but the percentage of the grade awarded to the GenAI output was minimal, and an absurd line was created by asking for no AI use in the larger part of the assessment. Finally, the assessments with unlimited collaboration were designed by colleagues who believed that writing without GenAI was not authentic and that employers would expect AI literacy skills, a belief that is perhaps not misplaced based on the figure given in O’Mahony’s blog.

Reframing the Narrative: GenAI as Opportunity

We see the need to treat GenAI as a partner in education, one that encourages critical reflection. This will require carefully scaffolded teaching activities to develop students’ AI literacy and avoid cognitive offloading. Thankfully, ways forward have begun to appear, as noted in the work of Gerlich and Jose et al.

Conclusion: From Wicked to ‘Witch’ lane?

As educators, we have a choice: we can resist and decouple from GenAI, or we can choose to lead the narrative strategically and optimistically. Although the pathway forward may not be a yellow brick road, we believe it’s worth considering which lane may suit us best. The key is that we don’t do this in isolation, but take a pragmatic approach across the entire degree programme, considering the level of study and the appropriate AI literacy skills.

GenAI acknowledgement:
Microsoft Copilot (https://copilot.microsoft.com) – used to create a draft blog from our research paper.

Kerith George-Briant

Learner Development Manager
Abertay University

Kerith George-Briant manages the Learner Development Service at Abertay. Her key interests are in building best practices in using AI, inclusivity, and accessibility.

Jack Hogan

Lecturer in Academic Practice
Abertay University

Jack Hogan works within the Abertay Learning Enhancement (AbLE) Academy as a Lecturer in Academic Practice. His research interests include student transitions and the first-year experience, microcredentials, skills development and employability. 




AI Could Revolutionise Higher Education in a Way We Did Not Expect

by Brian Mulligan – e-learning consultant with Universal Learning Systems (ulsystems.com)
Estimated reading time: 5 minutes
Artificial intelligence is poised to unleash a revolution in higher education, not in the ways we’ve conventionally imagined, but through unexpected and profound transformations. This image visualises AI as a central, dynamic force reshaping academic landscapes, curriculum delivery, and the very nature of learning in universities. Image (and typos) generated by Nano Banana.

The current conversation about Artificial Intelligence (AI) in higher education primarily focuses on efficiency and impact. People talk about how AI can personalise learning, streamline administrative tasks, and help colleges “do more with less.” For decades, every new technology, from online training to MOOCs, promised a similar transformation. Generative AI certainly offers powerful tools to enhance existing processes.

However, the truly revolutionary potential of AI in higher education may come from a more critical and urgent pressure: its significant challenge to the integrity of academic credentials and the learning processes they are supposed to represent.

Historically, colleges haven’t had a strong incentive to completely overhaul their teaching models just because new technology arrived. Traditional lectures, established assessment methods, and the value of a physical campus have remained largely entrenched. Technology usually just served to augment existing practices, not to transform the underlying structures of teaching, learning, and accreditation.

AI, however, may be a different kind of catalyst for change.

The Integrity Challenge

AI’s ability to create human-quality text, solve complex problems, and produce creative outputs has presented a serious challenge to academic integrity. Reports show a significant rise in AI-driven cheating, with many students now routinely using these tools to complete their coursework. For a growing number of students, offloading cognitive labour, from summarising readings to generating entire essays, to AI is becoming the new norm.

This widespread and mostly undetectable cheating compromises the entire purpose of assessment: to verify genuine learning and award credible qualifications. Even students committed to authentic learning feel compromised, forced to compete against peers using AI for an unfair advantage.

Crucially, even when AI use is approved, there’s a legitimate concern that it can undermine the learning process itself. If students rely on AI for foundational tasks like summarisation and idea generation, they may bypass the essential cognitive engagement and critical thinking development. This reliance can lead to intellectual laziness, meaning the credentials universities bestow may no longer reliably signify genuine knowledge and skills. This creates an urgent imperative for institutions to act.

The Shift to Authentic Learning

While many believe we can address this simply by redesigning assignments, the challenge invites, and may even require, a structural shift towards more radical educational models. These new approaches, which have been emerging to address the challenges of quality, access and cost, may also prove to be the most effective responses to the academic integrity problem.

To illustrate the point, let’s look at three examples of such emerging models:

  1. Flipped Learning: Students engage with core content independently online. Valuable in-person time is then dedicated to active learning like problem-solving, discussions, and collaborative projects. Educators can directly observe the application of knowledge, allowing for a more authentic assessment of understanding.
  2. Project-Based Learning (PBL): Often seen as an integrated flipped model, PBL immerses students in complex, integrated projects over extended periods. The focus is on applying knowledge from multiple modules and independent research to solve real-world problems. These projects demand sustained, supervised engagement, creative synthesis, and complex problem-solving, capabilities that are very hard to simply outsource to AI.
  3. Work-Based Learning (WBL): A significant part of the student’s journey takes place in authentic workplace settings. The emphasis shifts entirely to the demonstrable application of skills and knowledge in genuine professional contexts, a feat AI alone cannot achieve. Assessment moves to evaluating how a student performs and reflects in their role, including how they effectively and ethically integrate AI tools professionally.

AI as the Enabler of Change

Shifting to these models isn’t easy. Can institutions afford the resources to develop rich content, intricate project designs, and robust supervisory frameworks? Creating and assessing numerous, varied, and authentic tasks requires significant time and financial investment.

This is where technology, now including AI itself, becomes the key enabler for the feasibility of these new pedagogical approaches. Learning technologies, intelligently deployed, can help by:

  • Affordably Creating Content: AI tools can rapidly develop diverse learning materials, including texts, videos, and formative quizzes, as well as more sophisticated assessment designs.
  • Providing Automated Learning Support: AI-powered tutors and chatbots offer 24/7 support, guiding students through challenging material, which personalises the learning journey.
  • Monitoring Independent Work: Learning analytics, enhanced by AI, can track student engagement and flag struggling individuals, allowing educators to provide timely, targeted human intervention (a minimal sketch of such a flagging heuristic follows this list).
  • Easing the Assessment Burden: Technology can streamline the heavy workload associated with more varied assignments. Simple digital tools like structured rubrics and templated feedback systems free up educator time for nuanced, human guidance.
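
To make the monitoring point above concrete, here is a minimal sketch of the kind of flagging heuristic a learning-analytics layer might apply. Everything in it (the activity fields, weights, and threshold) is a hypothetical illustration rather than a description of any real institutional system, and the flag triggers human follow-up rather than automated action:

```python
from dataclasses import dataclass

@dataclass
class WeeklyActivity:
    """Hypothetical per-student activity pulled from a VLE log."""
    student_id: str
    logins: int            # sessions this week
    video_minutes: float   # minutes of course video watched
    submissions: int       # formative tasks attempted

def engagement_score(a: WeeklyActivity) -> float:
    """Weighted score in roughly [0, 1]; the weights and caps are assumptions."""
    logins = min(a.logins, 5) / 5
    video = min(a.video_minutes, 60) / 60
    tasks = min(a.submissions, 3) / 3
    return 0.3 * logins + 0.3 * video + 0.4 * tasks

def flag_struggling(cohort: list[WeeklyActivity], threshold: float = 0.4) -> list[str]:
    """Return IDs of students below the threshold so a tutor can follow up."""
    return [a.student_id for a in cohort if engagement_score(a) < threshold]

cohort = [
    WeeklyActivity("s001", logins=4, video_minutes=55, submissions=2),
    WeeklyActivity("s002", logins=1, video_minutes=5, submissions=0),
]
print(flag_struggling(cohort))  # -> ['s002']
```

The design choice worth noting is in the last function: the system only surfaces names; deciding what the flag means, and what intervention fits, stays with the educator.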

In summary, the most significant impact of AI isn’t the familiar promise of doing things better or faster. By undermining traditional methods of learning verification through the ease of academic dishonesty, AI has created an unavoidable pressure for systemic change. It forces colleges to reconsider what they are assessing and what value their degrees truly represent.

The real shift is that AI, by challenging the old system so thoroughly, makes the redesign of higher education a critical necessity.

Brian Mulligan

E-learning Consultant
Universal Learning Systems (ulsystems.com)

Brian Mulligan is an e-learning consultant with Universal Learning Systems (ulsystems.com) having retired as Head of Online Learning Innovation at Atlantic Technological University in Sligo in 2022. His current interests include innovative models of higher education and the strategic use of learning technologies in higher education.

