Building the Manifesto: How We Got Here and What Comes Next

By Ken McCarthy
Estimated reading time: 6 minutes
A minimalist illustration featuring the silhouette of a person standing and gazing toward a horizon line formed by soft, glowing digital patterns and abstract light streams. The scene blends naturalistic contemplation with modern technology, symbolizing human agency in shaping the future of AI against a clean, neutral background. Image (and typos) generated by Nano Banana.
Looking ahead: As we navigate the complexities of generative AI in higher education, it is crucial to remember that technology does not dictate our path. Through ethical inquiry and reimagined learning, the horizon is still ours to shape. Image (and typos) generated by Nano Banana.

When Hazel and I started working with GenAI in higher education, we did not set out to write a manifesto. We were simply trying to make sense of a fast-moving landscape. GenAI arrived quickly, finding its way into classrooms and prompting new questions about academic integrity and AI integration long before we had time to work through what it all meant. Students were experimenting earlier than many staff felt prepared for. Policies were still forming.

What eventually became the Manifesto for Generative AI in Higher Education began as our attempt to capture our thoughts. Not a policy, not a fully fledged framework, not a strategy. Just a way to hold the questions, principles, and tensions that kept surfacing. It took shape through notes gathered in margins, comments shared after workshops, ideas exchanged in meetings, and moments in teaching sessions that stayed with us long after they ended. It was never a single project. It gathered itself slowly.

From the start, we wanted it to be a short read that opened the door to big ideas. The sector already has plenty of documents that run to seventy or eighty pages. Many of them are helpful, but they can be difficult to take into a team meeting or a coffee break. We wanted something different. Something that could be read in ten minutes, but still spark thought and conversation. A series of concise statements that felt recognisable to anyone grappling with the challenges and possibilities of GenAI. A document that holds principles without pretending to offer every answer. We took inspiration from the Edinburgh Manifesto for Teaching Online, which reminded us that a series of short, honest statements can travel further than a long policy ever will.

The manifesto is a living reflection. It recognises that we stand at a threshold between what learning has been and what it might become. GenAI brings possibility and uncertainty together, and our role is to respond with imagination and integrity to keep learning a deeply human act.

Three themes shaped the work

As the ideas settled, three themes emerged that helped give structure to the thirty statements.

Rethinking teaching and learning responds to an age of abundance. Information is everywhere. The task of teaching shifts toward helping students interpret, critique, and question rather than collect. Inquiry becomes central. Several statements address this shift, emphasising that GenAI does not replace thinking; it reveals the cost of not thinking. They point toward assessment design that rewards insight over detection and remind us that curiosity drives learning in ways that completion never can.

Responsibility, ethics, and power acknowledges that GenAI is shaped by datasets, values, and omissions. It is not neutral. This theme stresses transparency, ethical leadership, and the continuing importance of academic judgement. It challenges institutions to act with care, not just efficiency. It highlights that prompting is an academic skill, not a technical trick, and that GenAI looks different in every discipline, which means no single approach will fit all contexts.

Imagination, humanity, and the future encourages us to look beyond the disruption of the present moment and ask what we want higher education to become. It holds inclusion as a requirement rather than an aspiration. It names sustainability as a learning outcome. It insists that ethics belong at the beginning of design processes. It ends with the reminder that the horizon is still ours to shape and that the future classroom is a conversation where people and systems learn in dialogue without losing sight of human purpose.

How it came together

The writing process was iterative. Some statements arrived whole. Others needed several attempts. We removed the ones that tried to do too much and kept the ones that stayed clear in the mind after a few days. We read them aloud to test the rhythm. The text only settled into its final shape once we noticed the three themes forming naturally.

The feedback from our reviewers, Tom Farrelly and Sue Beckingham, strengthened the final version. Their comments helped us tighten the language and balance the tone. The manifesto may have two named authors, but it is built from many voices.

Early responses from the sector

In the short time since the manifesto was released, the webpage has been visited by more than 750 people from 40 countries. For a document that began as a few lines in a notebook, this has been encouraging. It suggests the concerns and questions we tried to capture are widely shared. More importantly, it signals that there is an appetite for a conversation that is thoughtful, practical, and honest about the pace of change.

This early engagement reinforces something we felt from the start. The manifesto is only the beginning. It is not a destination. It is a point of departure for a shared journey.

Next steps: a book of voices across the sector

To continue that journey, we are developing a book of short essays and chapters that respond to the manifesto. Each contribution will explore a statement within the document. The chapters will be around 1,000 words. They can draw on practice, research, disciplinary experience, student partnership, leadership, policy, or critique. They can support, question, or challenge the manifesto. The aim is not agreement. The aim is insight.

We want to bring together educators, librarians, technologists, academic developers, researchers, students, and professional staff. The only requirement is that contributors have something to say about how GenAI is affecting their work, their discipline, or their students.

An invitation to join us

If you would like to contribute, we would welcome your expression of interest. You do not need specialist expertise in AI. You only need a perspective that might help the sector move forward with clarity and confidence.

Your chapter should reflect on a single statement. It could highlight emerging practice or ask questions that do not yet have answers. It could bring a disciplinary lens or a broader institutional one.

The manifesto was built from shared conversations. The next stage will be shaped by an even wider community. If this work is going to stay alive, it needs many hands.

The horizon is still ours to shape. If you would like to help shape it with us, please submit an expression of interest through the following link: https://forms.gle/fGTR9tkZrK1EeoLH8

Ken McCarthy

Head of Centre for Academic Practice
South East Technological University

As Head of the Centre for Academic Practice at SETU, I lead strategic initiatives to enhance teaching, learning, and assessment across the university. I work collaboratively with academic staff, professional teams, and students to promote inclusive, research-informed, and digitally enriched education.
I’m passionate about fostering academic excellence through professional development, curriculum design, and scholarship of teaching and learning. I also support and drive innovation in digital pedagogy and learning spaces.



Report Reveals Potential of AI to Help UK Higher Education Sector Assess Its Research More Efficiently and Fairly


A stylized visual showing a network of research papers and data graphs being analyzed and sorted by a glowing, benevolent AI interface (represented by a digital hand) over the map of the United Kingdom, symbolizing efficiency and impartial assessment in academia. Image (and typos) generated by Nano Banana.
Streamlining academia: A new report illuminates how artificial intelligence can be leveraged to introduce greater efficiency and fairness into the complex process of assessing research within the UK’s higher education sector. Image (and typos) generated by Nano Banana.

Source

University of Bristol

Summary

This report highlights how UK universities are beginning to integrate generative AI into research assessment processes, marking a significant shift in institutional workflows. Early pilot programmes suggest that AI can assist in evaluating research outputs, managing reviewer assignments and streamlining administrative tasks associated with national research exercises. The potential benefits include increased consistency across assessments, reduced administrative burden and enhanced scalability for institutions with extensive research portfolios. Despite these advantages, the report underscores the importance of strong governance structures, transparent methodological frameworks and ongoing human oversight to ensure fairness, academic integrity and alignment with sector norms. The emerging consensus is that AI should serve as an augmenting tool rather than a replacement for expert judgement. Institutions are encouraged to take a measured approach that balances innovation with ethical responsibility while exploring long-term strategies for responsible adoption and sector-wide coordination. This marks a shift from viewing AI as a hypothetical tool for research assessment to recognising it as an active component of evolving academic practice.

Key Points

  • GenAI already used in UK HE for research assessment.
  • Potential efficiency gains in processing large volumes of research.
  • Increased standardisation of evaluation.
  • Governance and oversight essential.
  • Recommends controlled scaling across sector.


URL

https://www.bristol.ac.uk/news/2025/november/report-reveals-potential-of-ai-to-help-assess-research-more-efficiently-.html

Summary generated by ChatGPT 5.1


Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy

By James Hanley, Oliver Harris, Caitlin Walsh, Sam Blanch, Dakota Venn-Keane, Eve Whelan, Luke Kiely, Jake Power, and Alex Rockett Power in collaboration with ChatGPT and Dr Hazel Farrell
Estimated reading time: 7 minutes
A group of eight music students from the BA (Hons) Music program at SETU are pictured in a futuristic, neon-lit "futureville" setting. They are gathered around a piano, which glows with digital accents, against a backdrop of towering, illuminated cityscapes and flowing data streams.
The future is now! BA (Hons) Music students from SETU in a vibrant “futureville” setting, blending the timeless artistry of music with cutting-edge technological imagination.

In recognition of how deeply AI is becoming embedded in the educational landscape, a co-created assignment exploring possibilities for music educators was considered timely. As part of the Year 3 Music Pedagogy module at South East Technological University (SETU), students were tasked with designing a learning activity that meaningfully integrated AI into the process. They were asked not only to create a resource but to trial it, evaluate it, and critically reflect on how AI shaped the learning experience. A wide range of free AI tools were used, including ChatGPT, SUNO, Audacity, Napkin, Google Gemini, NotebookLM, and Eleven Labs, and each student focused on a teaching resource that resonated with them, such as interactive tools, infographics, lesson plans, and basic websites.

Across their written and audio reflections, a rich picture emerged: AI is powerful, fallible, inspiring, frustrating, and always dependent on thoughtful human oversight. This blog is based on their reflections, which reveal a generation of educators learning not just how to use AI, but why it must be used with care.

Expanding Pedagogical Possibilities

Students consistently highlighted AI’s ability to accelerate creativity and resource development. Several noted that AI made it easier to create visually engaging materials, such as diagrams, colourful flashcards, or child‑friendly graphics. One student reflected, “With just a click of the mouse, anyone can generate their own diagrams and flash cards for learning,” emphasising how AI allowed them to design tools they would otherwise struggle to produce manually.

Others explored AI‑generated musical content. One student used a sight‑reading generator to trial melodic exercises, observing that while the exercises themselves were well‑structured, “the feedback was exceedingly generous.” Another used ChatGPT to build a lesson structure, describing the process as “seamless and streamlined,” though still requiring adjustments to ensure accuracy and alignment with Irish terminology. One reflection explained, “AI can create an instrumental track in a completely different style, but it still needs human balance through EQ, compression, and reverb to make it sound natural.” This demonstrated how AI and hands-on editing can work together to develop both musical and technical skills.

Another student designed an interactive rhythm game for children, using ChatGPT to progressively refine layout, colour schemes, difficulty levels, and supportive messages such as “Nice timing!” and “Perfect rhythm!” They described an iterative process requiring over 30 versions as the model continuously adapted to new instructions. The result was a working single‑player prototype that demonstrated both the creative potential and technical limits of AI‑assisted design.

The Teacher’s Role Remains Central

Across all reflections, students expressed strong awareness that AI cannot replace fundamental aspects of music teaching. Human judgment, accuracy, musical nuance, and relational connection were seen as irreplaceable. One student wrote that although AI can generate ideas and frameworks, “the underlying educational thinking remained a human responsibility.” Another reflected on voice‑training tools, noting that constant pitch guidance from AI could become “a crutch,” misleading students into believing they were singing correctly even when not. Many recognised that while AI can speed up creative processes, the emotional control, balance, and overall musical feel must still come from human input. One reflection put it simply: “AI gives you the idea, but people give it life.”

There was also a deep recognition of the social dimension of teaching. As one student put it, the “teacher–student relationship bears too much of an impact” to be substituted by automated tools. Many emphasised that confidence‑building, emotional support, and adaptive feedback come from real educators, not algorithms.

Challenges, Risks, and Ethical Considerations

The assignment surfaced several important realisations, including the fact that technical inaccuracies were common. Students identified incorrect musical examples, inconsistent notation, malfunctioning website features, and audio‑mixing problems. One student documented how, over time, the “quality of the site got worse,” illustrating AI’s tendency to forget earlier instructions in long interactions. This reinforced the need for rigorous verification when creating learning materials.

Another reflection noted that not all AI websites perform equally; some produce excellent results, while others generate distorted or incomplete outputs, forcing teachers to try multiple tools before finding one that works. It also reminded educators that even free or simple programs, like basic versions of Audacity, can still teach valuable mixing and editing skills without needing expensive software. A parallel concern was over‑reliance. Students worried that teachers might outsource too much planning to AI or that learners might depend on automated feedback rather than developing critical listening skills. As one reflection warned, “AI can and will become a key tool… the crucial factor is that we as real people know where the line is between a ‘tool’ and a ‘free worker.’”

Equity of access also arose as a barrier. Subscription‑based AI tools required credits or payment, creating challenges for students and highlighting ethical tensions between commercial technologies and educational use. Students demonstrated strong awareness of academic integrity. They distinguished between using AI to support structure and clarity versus allowing AI to generate entire lessons or presentations. One student cautioned that presenting AI‑produced content as one’s own is “blatant plagiarism,” highlighting the need for transparent and ethical practice.

Learning About Pedagogy and Professional Identity

Many students described developing a clearer sense of themselves as educators. They reflected on the complexity of communicating clearly, engaging learners, and designing accessible content. Some discovered gaps in their teaching confidence; others found new enthusiasm for pedagogical design. One wrote, “Teaching and clearly communicating my views was more challenging than I assumed,” acknowledging the shift from student to teacher mindset. Another recognised that while AI could support efficiency, it made them more aware of their responsibility for accuracy and learner experience.

Imagining the Future of AI in Music Education

Students were divided between optimism and caution. Some saw AI becoming a standard part of educational resource creation, enabling personalised practice, interactive learning, and rapid content generation. Others expressed concern about the possibility of AI replacing human instruction if not critically managed. However, all students agreed on one point: AI works best when treated as a supportive tool rather than an autonomous teacher. As one reflection summarised, “It is clear to me that AI is by no means a replacement for musical knowledge or teaching expertise.” Another added, “AI can make the process faster and more creative, but it still needs the human touch to sound right.”

Dr Hazel Farrell

Academic Lead for GenAI, Programme Leader BA (Hons) Music
South East Technological University

Dr Hazel Farrell is the SETU Academic Lead for Generative AI, and lead for the N-TUTORR National Gen AI Network project GenAI:N3, which aims to draw on expertise across the higher education sector to create a network and develop resources to support staff and students. She has presented her research on integrating AI into the classroom in a multitude of national and international forums focusing on topics such as Gen AI and student engagement, music education, assessment re-design, and UDL.



‘We Could Have Asked ChatGPT’: Students Fight Back Over Course Taught by AI


A digital illustration of a group of diverse students standing in a classroom, looking frustrated and pointing towards an empty podium where a holographic projection of a generic AI avatar is visible. The text "WE COULD HAVE ASKED CHATGPT" is superimposed above the students. Image (and typos) generated by Nano Banana.
The revolt against automation: Capturing the frustration of students pushing back against educational institutions that rely on AI to replace human instructors. Image (and typos) generated by Nano Banana.

Source

The Guardian

Summary

Students on a coding apprenticeship at the University of Staffordshire say they were “robbed of knowledge” after discovering that large portions of their course materials—including slides, assignments and even voiceovers—were generated by AI. Despite university policies restricting students’ use of AI, staff appeared to rely heavily on AI-generated teaching content, leading to accusations of hypocrisy and declining trust in the programme. Students reported inconsistent editing, generic content and bizarre glitches such as a mid-video switch to a Spanish accent. Complaints brought little change, and although human lecturers delivered the final session, students argue the damage to their learning and career prospects has already been done. The case highlights rising tensions as universities increasingly adopt AI tools without transparent standards or safeguards.

Key Points

  • Staffordshire students discovered widespread use of AI-generated slides, tasks and videos.
  • AI usage contradicted strict policies prohibiting students from submitting AI-generated work.
  • Students reported generic content, inconsistent editing and AI voiceover glitches.
  • Repeated complaints yielded limited response; a human lecturer was added only at the end.
  • Students fear lost learning, reduced programme credibility and wasted time.


URL

https://www.theguardian.com/education/2025/nov/20/university-of-staffordshire-course-taught-in-large-part-by-ai-artificial-intelligence

Summary generated by ChatGPT 5


This is not the end but a beginning: Responding to “Something Wicked This Way Comes”

By Kerith George-Briant and Jack Hogan, Abertay University Dundee
Estimated reading time: 5 minutes
A conceptual illustration showing a digital roadmap splitting into two distinct, glowing paths, one labeled "Secure Assessment" and the other "Open Assessment." The background blends subtle academic motifs with swirling binary code, symbolizing the strategic integration of Generative AI into higher education assessment practices. Image (and typos) generated by Nano Banana.
Navigating the future: The “Two-Lane Approach” to Generative AI in assessment—balancing secure testing of threshold concepts (Lane 1) with open collaboration for developing AI literacy and critical thinking (Lane 2). Image (and typos) generated by Nano Banana.

O’Mahony’s provocatively titled blog, “Something Wicked This Way Comes”, articulated feelings we recognised from across the sector: Generative AI (GenAI) tools have created unease, disruption, and uncertainty. At the same time, we felt that GenAI offered huge opportunities, and since higher education has led and celebrated innovation across disciplines for centuries, we were intrigued by how this would translate into our assessment practices.

At Abertay University, we’ve been exploring the “wicked problem” of whether to change teaching practices through a small-scale research project entitled “Lane Change Ahead: Artificial Intelligence’s Impact on Assessment Practices.” Our findings echo O’Mahony’s observations: while GenAI does pose a challenge to academic integrity and traditional assessment models, it also offers opportunities for innovation, equity, and deeper learning. We must respond thoughtfully, however, and acknowledge that views on GenAI vary widely.

Academic Sensemaking

To understand colleagues’ perspectives and experiences, we applied Degn’s (2016) concept of academic sensemaking to understand how the colleagues we interviewed felt about GenAI. Findings showed that some assessment designers are decoupling, designing assessments that use GenAI outputs without requiring students to engage with the tools. Others are defiant or defeatist, allowing limited collaboration with GenAI tools but awarding a low percentage of the grade to that output. And some are strategic and optimistic, embracing GenAI as a tool for learning, creativity, and employability.

The responses show the reasons for unease are not just pedagogical; they’re deeply personal. GenAI challenges academic identity. Recognising this emotional response is essential to supporting staff if change is needed.

Detection and the Blurred Line

And change is needed, we would argue. Back in 2023, Perkins et al.’s analysis of Turnitin’s AI detection capabilities revealed that while 91% of fully AI-generated submissions were flagged, the average detection within each paper was only 54.8%, and only half of those flagged papers would have been referred for academic misconduct. Similar studies since then have continued to produce the same types of results. And if detection isn’t reliable, setting an “absurd line”, as described by Corbin et al., becomes ever more incongruous. There is no reliable way to tell whether a student stopped at using AI for brainstorming or engaged critically with AI-paraphrased output. Some may read this and think it’s game over; however, if we embrace these challenges and adapt our approaches, we can find solutions that are fit for purpose.

From Fear to Framework: The Two-Lane Approach

So, what is the solution? Our research explored whether the two-lane approach developed by Liu and Bridgeman would work at Abertay, where:

  • Lane 1: Secure Assessments would be conducted under controlled conditions to assure learning of threshold concepts and
  • Lane 2: Open Assessments would allow unrestricted use of GenAI.

Our case studies revealed three distinct modes of GenAI integration:

  • AI Output Only – Students critiqued AI-generated content without using GenAI themselves. This aligned with Lane 1 and a secure assessment method focusing on threshold concepts.
  • Limited Collaboration – Students used GenAI for planning and a minimal piece of output within a larger assessment that otherwise did not allow GenAI use. Students developed some critical thinking, but weren’t able to apply this learning to the whole assessment.
  • Unlimited Collaboration – Students were fully engaged with GenAI, with reflection and justification built into the assessment. Assessment designers reported that students produced higher quality work and demonstrated enhanced critical thinking.

Each mode reflected a different balance of trust, control, and pedagogical intent. Interestingly, the AI Output Only pieces were secure and used to build AI literacy while meeting PSRB requirements, which asked for certain competencies and skills to be tested. The limited collaboration had an element of open assessment, but the percentage of the grade awarded to the output was minimal, and an absurd line was created by asking for no AI use in the larger part of the assessment. Finally, the assessments with unlimited collaboration were designed because those colleagues believed that writing without GenAI was not authentic and that employers would expect AI literacy skills, a belief perhaps not misplaced given the figure cited in O’Mahony’s blog.

Reframing the Narrative: GenAI as Opportunity

We see the need to treat GenAI as a partner in education, one that encourages critical reflection. This will require carefully scaffolded teaching activities to develop the AI literacy of students and avoid cognitive offloading. Thankfully, ways forward have begun to appear, as noted in the work of Gerlich and of Jose et al.

Conclusion: From Wicked to ‘Witch’ Lane?

As educators, we have a choice. We can resist and decouple from GenAI, or we can choose to lead the narrative strategically and optimistically. Although the pathway forward may not be a yellow brick road, we believe it’s worth considering which lane may suit us best. The key is that we don’t do this in isolation, but take a pragmatic approach across our entire degree programme, considering the level of study and the appropriate AI literacy skills.

GenAI acknowledgement:
Microsoft Copilot (https://copilot.microsoft.com) – used to create a draft blog from our research paper.

Kerith George-Briant

Learner Development Manager
Abertay University

Kerith George-Briant manages the Learner Development Service at Abertay. Her key interests are in building best practices in using AI, inclusivity, and accessibility.

Jack Hogan

Lecturer in Academic Practice
Abertay University

Jack Hogan works within the Abertay Learning Enhancement (AbLE) Academy as a Lecturer in Academic Practice. His research interests include student transitions and the first-year experience, microcredentials, skills development and employability. 

