HEA – Generative AI in Higher Education Teaching & Learning: Policy Framework


Source

O’Sullivan, James, Colin Lowry, Ross Woods & Tim Conlon. Generative AI in Higher Education Teaching &
Learning: Policy Framework. Higher Education Authority, 2025. DOI: 10.82110/073e-hg66.

Summary

This policy framework provides a national, values-based approach to guiding the adoption of generative artificial intelligence (GenAI) in teaching and learning across Irish higher education institutions. Rather than prescribing uniform rules, it establishes a shared set of principles to support informed, ethical, and pedagogically sound decision-making. The framework recognises GenAI as a structural change to higher education—particularly to learning design, assessment, and academic integrity—requiring coordinated institutional and sector-level responses rather than ad hoc or individual initiatives.

Focused explicitly on teaching and learning, the framework foregrounds five core principles: academic integrity and transparency; equity and inclusion; critical engagement, human oversight, and AI literacy; privacy and data governance; and sustainable pedagogy. It emphasises that GenAI should neither be uncritically embraced nor categorically prohibited. Instead, institutions are encouraged to adopt proportionate, evidence-informed approaches that preserve human judgement, ensure fairness, protect student data, and align AI use with the public mission of higher education. The document also outlines how these principles can be operationalised through governance, assessment redesign, staff development, and continuous sector learning.

Key Points

  • The framework offers a shared national reference point rather than prescriptive rules.
  • GenAI is treated as a systemic pedagogical challenge, not a temporary disruption.
  • Academic integrity depends on transparency, accountability, and visible authorship.
  • Equity and inclusion must be designed into AI adoption from the outset.
  • Human oversight and critical engagement remain central to learning and assessment.
  • AI literacy is positioned as a core capability for staff and students.
  • Privacy, data protection, and institutional data sovereignty are essential.
  • Assessment practices must evolve beyond reliance on traditional written outputs.
  • Sustainability includes both environmental impact and long-term educational quality.
  • Ongoing monitoring and sector-wide learning are critical to responsible adoption.

Conclusion

The HEA Policy Framework positions generative AI as neither a threat to be resisted nor a solution to be uncritically adopted. By grounding AI integration in shared academic values, ethical governance, and pedagogical purpose, it provides Irish higher education with a coherent foundation for navigating AI-enabled change while safeguarding trust, equity, and educational integrity.


URL

https://hea.ie/2025/12/22/hea-publishes-national-policy-framework-on-generative-ai-in-teaching-and-learning/

Summary generated by ChatGPT 5.2


Building the Manifesto: How We Got Here and What Comes Next

By Ken McCarthy
Estimated reading time: 6 minutes
Looking ahead: As we navigate the complexities of generative AI in higher education, it is crucial to remember that technology does not dictate our path. Through ethical inquiry and reimagined learning, the horizon is still ours to shape. Image (and typos) generated by Nano Banana.

When Hazel and I started working with GenAI in higher education, we did not set out to write a manifesto. We were simply trying to make sense of a fast-moving landscape. GenAI arrived quickly, finding its way into classrooms and prompting new questions about academic integrity and AI integration long before we had time to work through what it all meant. Students were experimenting earlier than many staff felt prepared for. Policies were still forming.

What eventually became the Manifesto for Generative AI in Higher Education began as our attempt to capture our thoughts. Not a policy, not a fully fledged framework, not a strategy. Just a way to hold the questions, principles, and tensions that kept surfacing. It took shape through notes gathered in margins, comments shared after workshops, ideas exchanged in meetings, and moments in teaching sessions that stayed with us long after they ended. It was never a single project. It gathered itself slowly.

From the start, we wanted it to be a short read that opened the door to big ideas. The sector already has plenty of documents that run to seventy or eighty pages. Many of them are helpful, but they can be difficult to take into a team meeting or a coffee break. We wanted something different. Something that could be read in ten minutes, but still spark thought and conversation. A series of concise statements that felt recognisable to anyone grappling with the challenges and possibilities of GenAI. A document that holds principles without pretending to offer every answer. We took inspiration from the Edinburgh Manifesto for Teaching Online, which reminded us that a series of short, honest statements can travel further than a long policy ever will.

The manifesto is a living reflection. It recognises that we stand at a threshold between what learning has been and what it might become. GenAI brings possibility and uncertainty together, and our role is to respond with imagination and integrity to keep learning a deeply human act.

Three themes shaped the work

As the ideas settled, three themes emerged that helped give structure to the thirty statements.

Rethinking teaching and learning responds to an age of abundance. Information is everywhere. The task of teaching shifts toward helping students interpret, critique, and question rather than collect. Inquiry becomes central. Several statements address this shift, emphasising that GenAI does not replace thinking. It reveals the cost of not thinking. They point toward assessment design that rewards insight over detection and remind us that curiosity drives learning in ways that completion never can.

Responsibility, ethics, and power acknowledges that GenAI is shaped by datasets, values, and omissions. It is not neutral. This theme stresses transparency, ethical leadership, and the continuing importance of academic judgement. It challenges institutions to act with care, not just efficiency. It highlights that prompting is an academic skill, not a technical trick, and that GenAI looks different in every discipline, which means no single approach will fit all contexts.

Imagination, humanity, and the future encourages us to look beyond the disruption of the present moment and ask what we want higher education to become. It holds inclusion as a requirement rather than an aspiration. It names sustainability as a learning outcome. It insists that ethics belong at the beginning of design processes. It ends with the reminder that the horizon is still ours to shape and that the future classroom is a conversation where people and systems learn in dialogue without losing sight of human purpose.

How it came together

The writing process was iterative. Some statements arrived whole. Others needed several attempts. We removed the ones that tried to do too much and kept the ones that stayed clear in the mind after a few days. We read them aloud to test the rhythm. The text only settled into its final shape once we noticed the three themes forming naturally.

The feedback from our reviewers, Tom Farrelly and Sue Beckingham, strengthened the final version. Their comments helped us tighten the language and balance the tone. The manifesto may have two named authors, but it is built from many voices.

Early responses from the sector

In the short time since the manifesto was released, the webpage has been visited by more than 750 people from 40 countries. For a document that began as a few lines in a notebook, this has been encouraging. It suggests the concerns and questions we tried to capture are widely shared. More importantly, it signals that there is an appetite for a conversation that is thoughtful, practical, and honest about the pace of change.

This early engagement reinforces something we felt from the start. The manifesto is only the beginning. It is not a destination. It is a point of departure for a shared journey.

Next steps: a book of voices across the sector

To continue that journey, we are developing a book of short essays and chapters that respond to the manifesto. Each contribution will explore a statement within the document. The chapters will be around 1,000 words. They can draw on practice, research, disciplinary experience, student partnership, leadership, policy, or critique. They can support, question, or challenge the manifesto. The aim is not agreement. The aim is insight.

We want to bring together educators, librarians, technologists, academic developers, researchers, students, and professional staff. The only requirement is that contributors have something to say about how GenAI is affecting their work, their discipline, or their students.

An invitation to join us

If you would like to contribute, we would welcome your expression of interest. You do not need specialist expertise in AI. You only need a perspective that might help the sector move forward with clarity and confidence.

Your chapter should reflect on a single statement. It could highlight emerging practice or ask questions that do not yet have answers. It could bring a disciplinary lens or a broader institutional one.

The manifesto was built from shared conversations. The next stage will be shaped by an even wider community. If this work is going to stay alive, it needs many hands.

The horizon is still ours to shape. If you would like to help shape it with us, please submit an expression of interest through the following link: https://forms.gle/fGTR9tkZrK1EeoLH8

Ken McCarthy

Head of Centre for Academic Practice
South East Technological University

As Head of the Centre for Academic Practice at SETU, I lead strategic initiatives to enhance teaching, learning, and assessment across the university. I work collaboratively with academic staff, professional teams, and students to promote inclusive, research-informed, and digitally enriched education.
I’m passionate about fostering academic excellence through professional development, curriculum design, and scholarship of teaching and learning. I also support and drive innovation in digital pedagogy and learning spaces.



Guidance on Artificial Intelligence in Schools


Source

Department of Education and Youth & Oide Technology in Education, October 2025

Summary

This national guidance document provides Irish schools with a framework for the safe, ethical, and effective use of artificial intelligence (AI), particularly generative AI (GenAI), in teaching, learning, and school leadership. It aims to support informed decision-making, enhance digital competence, and align AI use with Ireland’s Digital Strategy for Schools to 2027. The guidance recognises AI’s potential to support learning design, assessment, and communication while emphasising human oversight, teacher professionalism, and data protection.

It presents a balanced view of benefits and risks—AI can personalise learning and streamline administration but also raises issues of bias, misinformation, data privacy, and environmental impact. The report introduces a 4P framework—Purpose, Planning, Policies, and Practice—to guide schools in integrating AI responsibly. Teachers are encouraged to use GenAI as a creative aid, not a substitute, and to embed AI literacy in curricula. The document stresses the need for ethical awareness, alignment with GDPR and the EU AI Act (2024), and continuous policy updates as technology evolves.

Key Points

  • AI should support, not replace, human-led teaching and learning.
  • Responsible use requires human oversight, verification, and ethical reflection.
  • AI literacy for teachers, students, and leaders is central to safe adoption.
  • Compliance with GDPR and the EU AI Act ensures privacy and transparency.
  • GenAI tools must be age-appropriate and used within consent frameworks.
  • Bias, misinformation, and “hallucinations” demand critical human review.
  • The 4P Approach (Purpose, Planning, Policies, Practice) structures school-level implementation.
  • Environmental and wellbeing impacts must be considered in AI use.
  • Collaboration between the Department, Oide, and schools underpins future updates.
  • Guidance will be continuously revised to reflect evolving practice and research.

Conclusion

The guidance frames AI as a powerful but high-responsibility tool in education. By centring ethics, human agency, and data protection, schools can harness AI’s potential while safeguarding learners’ wellbeing, trust, and equity. Its iterative, values-led approach ensures Ireland’s education system remains adaptive, inclusive, and future-ready.


URL

https://assets.gov.ie/static/documents/dee23cad/Guidance_on_Artificial_Intelligence_in_Schools_25.pdf

Summary generated by ChatGPT 5


Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

by Jonathan Sansom – Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Estimated reading time: 5 minutes
Bridging the gap: This image illustrates how Microsoft Copilot can be leveraged in secondary education, moving from a “force analysis” of opportunities and challenges to the implementation of “pedagogical copilot agents” that assist both students and educators. Image (and typos) generated by Nano Banana.

At Hills Road, we’ve been living in the strange middle ground of generative AI adoption. If you charted its trajectory, it wouldn’t look like a neat curve or even the familiar ‘hype cycle’. It’s more like a tangled ball of wool: multiple forces pulling in competing directions.

The Forces at Play

Our recent work with Copilot Agents has made this more obvious. If we attempt a force analysis, the drivers for GenAI adoption are strong:

  • The need to equip students and staff with future-ready skills.
  • Policy and regulatory expectations from the DfE and Ofsted to demonstrate assurance around AI integration.
  • National AI strategies that frame this as an essential area for investment.
  • The promise of personalised learning and workload reduction.
  • A pervasive cultural hype, blending existential narratives with a relentless ‘AI sales’ culture.

But there are also significant restraints:

  • Ongoing academic integrity concerns.
  • GDPR and data privacy ambiguity.
  • Patchy CPD and teacher digital confidence.
  • Digital equity and access challenges.
  • The energy cost of AI at scale.
  • Polarisation of educator opinion, and staff change fatigue.

The result is persistent dissonance. AI is neither fully embraced nor rejected; instead, we are all negotiating what it might mean in our own settings.

Educator-Led AI Design

One way we’ve tried to respond is through educator-led design. Our philosophy is simple: we shouldn’t just adopt GenAI; we must adapt it to fit our educational context.

That thinking first surfaced in experiments on Poe.com, where we created an Extended Project Qualification (EPQ) Virtual Mentor. It was popular, but it lived outside institutional control – neither enterprise-grade nor GDPR-secure.

So in 2025 we have moved everything in-house. Using Microsoft Copilot Studio, we created 36 curriculum-specific agents, one for each A Level subject, deployed directly inside Teams. These agents are connected to our SharePoint course resources, ensuring students and staff interact with AI in a trusted, institutionally managed environment.
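To make the model concrete: each agent pairs a shared pedagogical instruction set with subject-specific knowledge sources. The sketch below is illustrative only, since the agents themselves are authored through the Copilot Studio interface rather than in code; the subject list and SharePoint URLs are hypothetical placeholders, not our actual sites.

```python
# Illustrative sketch only: the real agents are built in the Copilot Studio
# GUI. Conceptually, each agent is one shared instruction template
# instantiated per A Level subject. Names and URLs are hypothetical.

BASE_INSTRUCTIONS = (
    "You are a study mentor for A Level {subject}. Ground every answer in "
    "the linked course resources. Explain with everyday metaphors first, "
    "then introduce disciplinary terminology, and ask a follow-up question "
    "before deepening complexity."
)

SUBJECTS = {
    "Sociology": "https://example.sharepoint.com/sites/alevel-sociology",
    "Physics": "https://example.sharepoint.com/sites/alevel-physics",
    "History": "https://example.sharepoint.com/sites/alevel-history",
    # ... one entry per A Level subject, 36 in total
}

def build_agent_config(subject: str, knowledge_url: str) -> dict:
    """Assemble one subject agent: shared pedagogy, subject-specific sources."""
    return {
        "name": f"{subject} Mentor",
        "instructions": BASE_INSTRUCTIONS.format(subject=subject),
        "knowledge_sources": [knowledge_url],  # SharePoint course materials
        "channel": "Microsoft Teams",          # surfaced in Teams chat
    }

agents = [build_agent_config(s, url) for s, url in SUBJECTS.items()]
```

The point of the template is the division of labour: the pedagogy is written once, while each subject contributes only its name and its course materials.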

Built-in Pedagogical Skills

Rather than thinking of these agents as simply ‘question-answering machines’, we’ve tried to embed pedagogical skills that mirror what good teaching looks like. Each agent is structured around:

  • Explaining through metaphor and analogy – helping students access complex ideas in simple, relatable ways.
  • Prompting reflection – asking students to think aloud, reconsider, or connect their ideas.
  • Stretching higher-order thinking – moving beyond recall into analysis, synthesis, and evaluation.
  • Encouraging subject language use – reinforcing terminology in context.
  • Providing scaffolded progression – introducing concepts step by step, only deepening complexity as students respond.
  • Supporting responsible AI use – modelling ethical engagement and critical AI literacy.

These skills give the agents an educational texture. For example, if a sociology student asks: “What does patriarchy mean, but in normal terms?”, the agent won’t produce a dense definition. It will begin with a metaphor from everyday life, check understanding through a follow-up question, and then carefully layer in disciplinary concepts. The process is dialogic and recursive, echoing the scaffolding teachers already use in classrooms.
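As a minimal sketch of how that scaffolding might be expressed, the rules below restate the six skills as explicit agent instructions. The wording is invented for illustration; the deployed agents encode equivalent behaviour through topics and instructions authored in Copilot Studio.

```python
# Sketch of the dialogic scaffolding expressed as explicit instruction rules.
# Rule wording is illustrative, not the production configuration.

PEDAGOGICAL_RULES = [
    # Explaining through metaphor and analogy
    "Open with a concrete metaphor from everyday life before any formal definition.",
    # Prompting reflection
    "After each explanation, ask the student to restate the idea in their own words.",
    # Stretching higher-order thinking
    "Follow recall with an analysis, synthesis, or evaluation question.",
    # Encouraging subject language use
    "Reintroduce the correct disciplinary term once the student shows understanding.",
    # Providing scaffolded progression
    "Add one layer of complexity per turn, and only after a sound response.",
    # Supporting responsible AI use
    "If asked to complete assessed work outright, decline and coach instead.",
]

def compose_system_prompt(subject: str) -> str:
    """Combine a subject persona with the shared pedagogical rule set."""
    header = f"You are a dialogic study mentor for A Level {subject}.\n"
    return header + "\n".join(f"- {rule}" for rule in PEDAGOGICAL_RULES)

print(compose_system_prompt("Sociology"))
```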

The Case for Copilot

We’re well aware that Microsoft Copilot Studio wasn’t designed as a pedagogical platform. It comes from the world of Power Automate, not the classroom. In many ways we’re “hijacking” it for our purposes. But it works.

The technical model is efficient: one Copilot Studio authoring licence, no full Copilot licences required, and all interactions handled through Teams chat. Data stays in tenancy, governed by our 365 permissions. It’s simple, secure, and scalable.

And crucially, it has allowed us to position AI as a learning partner, not a replacement for teaching. Our mantra remains: pedagogy first, technology second.

Lessons Learned So Far

From our pilots, a few lessons stand out:

  • Moving to an in-tenancy model was essential for trust.
  • Pedagogy must remain the driver – we want meaningful learning conversations, not shortcuts to answers.
  • Expectations must be realistic. Copilot Studio has clear limitations, especially in STEM contexts where dialogue is weaker.
  • AI integration is as much about culture, training, and mindset as it is about the underlying technology.

Looking Ahead

As we head into 2025–26, we’re expanding staff training, refining agent ‘skills’, and building metrics to assess impact. We know this is a long-haul project – five years at least – but it feels like the right direction.

The GenAI systems that students and teachers use in college were designed mainly by engineers, developers, and commercial actors. What’s missing is the educator’s voice. Our work is about inserting that voice: shaping AI not just as a tool for efficiency, but as an ally for reflection, questioning, and deeper thinking.

The challenge is to keep students out of what I’ve called the ‘Cognitive Valley’, that place where understanding is lost because thinking has been short-circuited. Good pedagogical AI can help us avoid that.

We’re not there yet. Some results are excellent, others uneven. But the work is underway, and the potential is undeniable. The task now is to make GenAI fit our context, not the other way around.

Jonathan Sansom

Director of Digital Strategy,
Hills Road Sixth Form College, Cambridge

Passionate about education, digital strategy in education, social and political perspectives on the purpose of learning, cultural change, wellbeing, group dynamics, and the mysteries of creativity…


Software / Services Used

Microsoft Copilot Studio: https://www.microsoft.com/en-us/microsoft-365-copilot/microsoft-copilot-studio



Preparing Students for the World of Work Means Embracing an AI-Positive Culture


To truly prepare students for tomorrow’s workforce, higher education must foster an AI-positive culture. This involves embracing artificial intelligence not as a threat, but as a transformative tool that enhances skills and creates new opportunities in the evolving world of work. Image (and typos) generated by Nano Banana.

Source

Wonkhe

Summary

Alastair Robertson argues that higher education must move beyond piecemeal experimentation with generative AI and instead embed an “AI-positive culture” across teaching, learning, and institutional practice. While universities have made progress through policies such as the Russell Group’s principles on generative AI, most remain in an exploratory phase lacking strategic coherence. Robertson highlights the growing industry demand for AI literacy—especially foundational skills like prompting and evaluating outputs—contrasting this with limited student support in universities. He advocates co-creation among students, educators, and AI, where generative tools enhance learning personalisation, assessment, and data-driven insights. To succeed, universities must invest in technology, staff development, and policy frameworks that align AI with institutional values and foster innovation through strategic leadership and partnership with industry.

Key Points

  • Industry demand for AI literacy far outpaces current higher education provision.
  • Universities remain at an early stage of AI adoption, lacking coherent strategic approaches.
  • Co-creation between students, educators, and AI can deepen engagement and improve outcomes.
  • Embedding AI requires investment in infrastructure, training, and ethical policy alignment.
  • An AI-positive culture depends on leadership, collaboration, and flexibility to adapt as technology evolves.


URL

https://wonkhe.com/blogs/preparing-students-for-the-world-of-work-means-embracing-an-ai-positive-culture/

Summary generated by ChatGPT 5