Building the Manifesto: How We Got Here and What Comes Next

By Ken McCarthy
Estimated reading time: 6 minutes
Looking ahead: As we navigate the complexities of generative AI in higher education, it is crucial to remember that technology does not dictate our path. Through ethical inquiry and reimagined learning, the horizon is still ours to shape. Image (and typos) generated by Nano Banana.

When Hazel and I started working with GenAI in higher education, we did not set out to write a manifesto. We were simply trying to make sense of a fast-moving landscape. GenAI arrived quickly, finding its way into classrooms and prompting new questions about academic integrity and AI integration long before we had time to work through what it all meant. Students were experimenting earlier than many staff felt prepared for. Policies were still forming.

What eventually became the Manifesto for Generative AI in Higher Education began as our attempt to capture our thoughts. Not a policy, not a fully fledged framework, not a strategy. Just a way to hold the questions, principles, and tensions that kept surfacing. It took shape through notes gathered in margins, comments shared after workshops, ideas exchanged in meetings, and moments in teaching sessions that stayed with us long after they ended. It was never a single project. It gathered itself slowly.

From the start, we wanted it to be a short read that opened the door to big ideas. The sector already has plenty of documents that run to seventy or eighty pages. Many of them are helpful, but they can be difficult to take into a team meeting or a coffee break. We wanted something different. Something that could be read in ten minutes, but still spark thought and conversation. A series of concise statements that felt recognisable to anyone grappling with the challenges and possibilities of GenAI. A document that holds principles without pretending to offer every answer. We took inspiration from the Edinburgh Manifesto for Teaching Online, which reminded us that a series of short, honest statements can travel further than a long policy ever will.

The manifesto is a living reflection. It recognises that we stand at a threshold between what learning has been and what it might become. GenAI brings possibility and uncertainty together, and our role is to respond with imagination and integrity to keep learning a deeply human act.

Three themes shaped the work

As the ideas settled, three themes emerged that helped give structure to the thirty statements.

Rethinking teaching and learning responds to an age of abundance. Information is everywhere. The task of teaching shifts toward helping students interpret, critique, and question rather than collect. Inquiry becomes central. Several statements address this shift, emphasising that GenAI does not replace thinking. It reveals the cost of not thinking. They point toward assessment design that rewards insight over detection and remind us that curiosity drives learning in ways that completion never can.

Responsibility, ethics, and power acknowledges that GenAI is shaped by datasets, values, and omissions. It is not neutral. This theme stresses transparency, ethical leadership, and the continuing importance of academic judgement. It challenges institutions to act with care, not just efficiency. It highlights that prompting is an academic skill, not a technical trick, and that GenAI looks different in every discipline, which means no single approach will fit all contexts.

Imagination, humanity, and the future encourages us to look beyond the disruption of the present moment and ask what we want higher education to become. It holds inclusion as a requirement rather than an aspiration. It names sustainability as a learning outcome. It insists that ethics belong at the beginning of design processes. It ends with the reminder that the horizon is still ours to shape and that the future classroom is a conversation where people and systems learn in dialogue without losing sight of human purpose.

How it came together

The writing process was iterative. Some statements arrived whole. Others needed several attempts. We removed the ones that tried to do too much and kept the ones that stayed clear in the mind after a few days. We read them aloud to test the rhythm. The text only settled into its final shape once we noticed the three themes forming naturally.

The feedback from our reviewers, Tom Farrelly and Sue Beckingham, strengthened the final version. Their comments helped us tighten the language and balance the tone. The manifesto may have two named authors, but it is built from many voices.

Early responses from the sector

In the short time since the manifesto was released, the webpage has been visited by more than 750 people from 40 countries. For a document that began as a few lines in a notebook, this has been encouraging. It suggests the concerns and questions we tried to capture are widely shared. More importantly, it signals that there is an appetite for a conversation that is thoughtful, practical, and honest about the pace of change.

This early engagement reinforces something we felt from the start. The manifesto is only the beginning. It is not a destination. It is a point of departure for a shared journey.

Next steps: a book of voices across the sector

To continue that journey, we are developing a book of short essays and chapters that respond to the manifesto. Each contribution will explore a statement within the document. The chapters will be around 1,000 words. They can draw on practice, research, disciplinary experience, student partnership, leadership, policy, or critique. They can support, question, or challenge the manifesto. The aim is not agreement. The aim is insight.

We want to bring together educators, librarians, technologists, academic developers, researchers, students, and professional staff. The only requirement is that contributors have something to say about how GenAI is affecting their work, their discipline, or their students.

An invitation to join us

If you would like to contribute, we would welcome your expression of interest. You do not need specialist expertise in AI. You only need a perspective that might help the sector move forward with clarity and confidence.

Your chapter should reflect on a single statement. It could highlight emerging practice or ask questions that do not yet have answers. It could bring a disciplinary lens or a broader institutional one.

The manifesto was built from shared conversations. The next stage will be shaped by an even wider community. If this work is going to stay alive, it needs many hands.

The horizon is still ours to shape. If you would like to help shape it with us, please submit an expression of interest through the following link: https://forms.gle/fGTR9tkZrK1EeoLH8

Ken McCarthy

Head of Centre for Academic Practice
South East Technological University

As Head of the Centre for Academic Practice at SETU, I lead strategic initiatives to enhance teaching, learning, and assessment across the university. I work collaboratively with academic staff, professional teams, and students to promote inclusive, research-informed, and digitally enriched education.
I’m passionate about fostering academic excellence through professional development, curriculum design, and scholarship of teaching and learning. I also support and drive innovation in digital pedagogy and learning spaces.

Artificial intelligence guidelines for teachers and students ‘notably absent’, report finds


A recent report has highlighted a significant void in modern education: the “notable absence” of clear artificial intelligence guidelines for both teachers and students. This image captures the frustration and confusion surrounding this lack of direction, underscoring the urgent need for comprehensive policies to navigate the integration of AI responsibly within academic settings. Image (and typos) generated by Nano Banana.

Source

Irish Examiner

Summary

A new report by ESRI (Economic and Social Research Institute) highlights a significant policy gap: Irish secondary schools largely lack up-to-date acceptable use policies (AUPs) that address AI. Among 51 large schools surveyed, only six had current policies, and none included detailed guidance on AI’s use in teaching or learning. The Department of Education says it’s finalising AI guidance to address risks, opportunities, and responsible use. The absence of clear, central policy leaves individual schools and teachers making ad hoc decisions.

Key Points

  • Only 6 of 51 large schools surveyed had up-to-date acceptable use policies, and none addressed AI governance.
  • AI-specific guidelines are “notably absent” in existing school policies.
  • Schools are left to decide individually how (or whether) to integrate AI in learning, without a shared framework.
  • The Department of Education expects to issue formal guidance imminently, supported by resources via the AI Hub and Oide TiE.
  • Policymaking lag is highlighted as a disconnect between fast technology change and slow institutional response.

URL

https://www.irishexaminer.com/news/arid-41715942.html

Summary generated by ChatGPT 5


Sometimes We Resist AI for Good Reasons


In an era where AI is rapidly integrating into all aspects of life, this image powerfully illustrates that ‘sometimes we resist AI for good reasons.’ It highlights critical concerns such as unethical biases, data security vulnerabilities, and the potential erosion of critical thought, underscoring the importance of cautious and principled engagement with artificial intelligence. Image (and typos) generated by Nano Banana.

Source

The Chronicle of Higher Education

Summary

Kevin Gannon argues that in crafting AI policies for universities, it’s vital to include voices critical of generative AI, not just technophiles. He warns that the rush to adopt AI (for grading, lesson planning, etc.) often ignores deeper concerns about academic values, workloads, and epistemic integrity. Institutions repeatedly issue policies that are outdated almost immediately, and students feel caught in the gap between policy and practice. Gannon’s call: resist the narrative of inevitability, listen to sceptics, and create policies rooted in local context, shared governance, and respect for institutional culture.

Key Points

  • Many universities struggle to keep AI policies updated in the face of fast technical change.
  • Students often receive blurry or conflicting guidance on when AI use is allowed.
  • The push for AI adoption is framed as inevitable, marginalising critics who raise valid concerns.
  • Local context matters deeply — uniform policies rarely do justice to varied departmental needs.
  • Including dissenting voices improves policy legitimacy and avoids blind spots.

URL

https://www.chronicle.com/article/sometimes-we-resist-ai-for-good-reasons

Summary generated by ChatGPT 5


As AI tools reshape education, schools struggle with how to draw the line on cheating


As AI tools become ubiquitous in education, schools are grappling with the complex and often ambiguous task of defining the line between legitimate AI assistance and academic misconduct. This image captures the intensity of discussions among educators striving to establish clear policies and maintain academic integrity in an evolving technological landscape. Image (and typos) generated by Nano Banana.

Source

ABC News

Summary

AI is now so widespread among students that traditional assessments (take‑home essays, homework) are often considered invitations to ‘cheat.’ Teachers are responding by shifting to in‑class writing, using lockdown browsers, blocking device access, redesigning assignments, and clarifying AI policies. But confusion remains: students don’t always have clarity on what’s allowed, and teaching methods lag behind the technology. There’s growing consensus that blanket bans are not enough — what matters more is teaching students how to use AI responsibly, with transparent guidelines that protect academic integrity without stifling learning.

Key Points

  • High prevalence of student AI use is challenging existing norms around homework and take‑home essays.
  • Teachers increasingly require in‑class work, verbal assessments, or technology controls (lockdown browsers).
  • Students are often unsure where the line is: what counts as cheating isn't always clear.
  • Institutions and faculty are drafting clearer policies and guidelines; bans alone are unviable.
  • Equity issues emerge: AI access/use varies, raising fairness concerns.

URL

https://abcnews.go.com/US/wireStory/ai-tools-reshape-education-schools-struggle-draw-line-125501970

Summary generated by ChatGPT 5


The Question All Colleges Should Ask Themselves About AI


As Artificial Intelligence reshapes industries and societies, colleges and universities are confronted with a fundamental challenge: redefining their core purpose. This image powerfully visualises the critical question that all academic institutions must now address regarding their relevance, value, and role in an increasingly AI-driven world. Image (and typos) generated by Nano Banana.

Source

The Atlantic

Summary

AI is now deeply embedded in college life — often unauthorised — and colleges are struggling with responses. Many institutions fail to enforce coherent, system‑wide policies, risking degradation of learning, peer relationships, and integrity of scholarship. The article suggests radical measures like tech/device bans or stronger honour codes to defend educational values, while teaching responsible AI use where appropriate. Colleges must choose whether to integrate AI or resist it, guided by their core values.

Key Points

  • Unauthorised AI use undermines learning and fairness.
  • Routine AI use removes opportunities for deep thinking and writing.
  • Institutional goals like originality are compromised by AI’s fabrications and IP issues.
  • Proposals: banning devices, honour codes, strict penalties.
  • Colleges must clarify values and boundaries for AI use.

URL

https://www.theatlantic.com/culture/archive/2025/09/ai-colleges-universities-solution/684160/

Summary generated by ChatGPT 5