Department of Education and Youth & Oide Technology in Education, October 2025
Summary
This national guidance document provides Irish schools with a framework for the safe, ethical, and effective use of artificial intelligence (AI), particularly generative AI (GenAI), in teaching, learning, and school leadership. It aims to support informed decision-making, enhance digital competence, and align AI use with Ireland’s Digital Strategy for Schools to 2027. The guidance recognises AI’s potential to support learning design, assessment, and communication while emphasising human oversight, teacher professionalism, and data protection.
It presents a balanced view of benefits and risks—AI can personalise learning and streamline administration but also raises issues of bias, misinformation, data privacy, and environmental impact. The report introduces a 4P framework—Purpose, Planning, Policies, and Practice—to guide schools in integrating AI responsibly. Teachers are encouraged to use GenAI as a creative aid, not a substitute, and to embed AI literacy in curricula. The document stresses the need for ethical awareness, alignment with GDPR and the EU AI Act (2024), and continuous policy updates as technology evolves.
Key Points
AI should support, not replace, human-led teaching and learning.
Responsible use requires human oversight, verification, and ethical reflection.
AI literacy for teachers, students, and leaders is central to safe adoption.
Compliance with GDPR and the EU AI Act ensures privacy and transparency.
GenAI tools must be age-appropriate and used within consent frameworks.
Bias, misinformation, and “hallucinations” demand critical human review.
The 4P Approach (Purpose, Planning, Policies, Practice) structures school-level implementation.
Environmental and wellbeing impacts must be considered in AI use.
Collaboration between the Department, Oide, and schools underpins future updates.
Guidance will be continuously revised to reflect evolving practice and research.
Conclusion
The guidance frames AI as a powerful but high-responsibility tool in education. By centring ethics, human agency, and data protection, schools can harness AI’s potential while safeguarding learners’ wellbeing, trust, and equity. Its iterative, values-led approach ensures Ireland’s education system remains adaptive, inclusive, and future-ready.
by Jonathan Sansom – Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Bridging the gap: This image illustrates how Microsoft Copilot can be leveraged in secondary education, moving from a “force analysis” of opportunities and challenges to the implementation of “pedagogical copilot agents” that assist both students and educators. Image (and typos) generated by Nano Banana.
At Hills Road, we’ve been living in the strange middle ground of generative AI adoption. If you charted its trajectory, it wouldn’t look like a neat curve or even the familiar ‘hype cycle’. It’s more like a tangled ball of wool: multiple forces pulling in competing directions.
The Forces at Play
Our recent work with Copilot Agents has made this more obvious. If we attempt a force analysis, the drivers for GenAI adoption are strong:
The need to equip students and staff with future-ready skills.
Policy and regulatory expectations from the DfE and Ofsted to demonstrate assurance around AI integration.
National AI strategies that frame this as an essential area for investment.
The promise of personalised learning and workload reduction.
A pervasive cultural hype, blending existential narratives with a relentless ‘AI sales’ culture.
But there are also significant restraints:
Ongoing academic integrity concerns.
GDPR and data privacy ambiguity.
Patchy CPD and teacher digital confidence.
Digital equity and access challenges.
The energy cost of AI at scale.
Polarisation of educator opinion and staff change fatigue.
The result is persistent dissonance. AI is neither fully embraced nor rejected; instead, we are all negotiating what it might mean in our own settings.
Educator-Led AI Design
One way we’ve tried to respond is through educator-led design. Our philosophy is simple: we shouldn’t just adopt GenAI; we must adapt it to fit our educational context.
That thinking first surfaced in experiments on Poe.com, where we created an Extended Project Qualification (EPQ) Virtual Mentor. It was popular, but it lived outside institutional control – neither enterprise-grade nor GDPR-secure.
So in 2025 we have moved everything in-house. Using Microsoft Copilot Studio, we created 36 curriculum-specific agents, one for each A Level subject, deployed directly inside Teams. These agents are connected to our SharePoint course resources, ensuring students and staff interact with AI in a trusted, institutionally managed environment.
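As a rough illustration of the model described above, here is a minimal Python sketch of how one subject agent's configuration could be recorded; the class, field names, and SharePoint URL are hypothetical placeholders for this article, not Copilot Studio's actual schema, which is configured through its own authoring environment.

```python
from dataclasses import dataclass, field

@dataclass
class SubjectAgent:
    """Illustrative record of one curriculum-specific agent's configuration."""
    subject: str                       # A Level subject the agent supports
    sharepoint_source: str             # SharePoint course library the agent is grounded on
    deployment_channel: str = "Teams"  # where students and staff interact with the agent
    skills: list[str] = field(default_factory=list)  # pedagogical behaviours (see next section)

# Hypothetical example: a sociology agent grounded on its course SharePoint site.
sociology_agent = SubjectAgent(
    subject="Sociology",
    sharepoint_source="https://example.sharepoint.com/sites/sociology-a-level",
    skills=["metaphor_and_analogy", "prompting_reflection", "scaffolded_progression"],
)
print(f"{sociology_agent.subject} agent grounded on {sociology_agent.sharepoint_source}")
```

Keeping the grounding source and deployment channel explicit in this way is what makes the trusted, institutionally managed environment auditable: every agent points only at resources the college already governs.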
Built-in Pedagogical Skills
Rather than thinking of these agents as simply ‘question answering machines’, we’ve tried to embed pedagogical skills that mirror what good teaching looks like. Each agent is structured around:
Explaining through metaphor and analogy – helping students access complex ideas in simple, relatable ways.
Prompting reflection – asking students to think aloud, reconsider, or connect their ideas.
Stretching higher-order thinking – moving beyond recall into analysis, synthesis, and evaluation.
Encouraging subject language use – reinforcing terminology in context.
Providing scaffolded progression – introducing concepts step by step, only deepening complexity as students respond.
Supporting responsible AI use – modelling ethical engagement and critical AI literacy.
These skills give the agents an educational texture. For example, if a sociology student asks: “What does patriarchy mean, but in normal terms?”, the agent won’t produce a dense definition. It will begin with a metaphor from everyday life, check understanding through a follow-up question, and then carefully layer in disciplinary concepts. The process is dialogic and recursive, echoing the scaffolding teachers already use in classrooms.
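To make this concrete, here is a hedged sketch of how those six skills might be written down as standing agent instructions; the wording, the constant name, and the helper function are illustrative assumptions for this article, not the college's actual Copilot Studio configuration.

```python
# Illustrative standing instructions, one line per embedded pedagogical skill.
PEDAGOGICAL_INSTRUCTIONS = """\
You are a subject mentor, not an answer machine.
1. Explain new ideas first through an everyday metaphor or analogy.
2. After each explanation, ask a short follow-up question that prompts reflection.
3. Push beyond recall: invite analysis, synthesis, and evaluation.
4. Reintroduce correct subject terminology once understanding is shown.
5. Deepen complexity only in response to what the student says (scaffolded progression).
6. Model responsible AI use: be open about uncertainty and point back to course materials.
"""

def build_prompt(subject: str, student_question: str) -> str:
    """Combine the standing instructions with a student's question into one prompt."""
    return (
        f"{PEDAGOGICAL_INSTRUCTIONS}\n"
        f"Subject: {subject}\n"
        f"Student question: {student_question}\n"
    )

# The sociology example above, expressed as the prompt an agent would receive.
print(build_prompt("Sociology", "What does patriarchy mean, but in normal terms?"))
```

The point of separating the standing instructions from the question is that the pedagogical behaviour travels with every exchange, so the dialogic, recursive scaffolding does not depend on how the student happens to ask.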
The Case for Copilot
We’re well aware that Microsoft Copilot Studio wasn’t designed as a pedagogical platform. It comes from the world of Power Automate, not the classroom. In many ways we’re “hijacking” it for our purposes. But it works.
The technical model is efficient: one Copilot Studio authoring licence, no full Copilot licences required, and all interactions handled through Teams chat. Data stays in tenancy, governed by our 365 permissions. It’s simple, secure, and scalable.
And crucially, it has allowed us to position AI as a learning partner, not a replacement for teaching. Our mantra remains: pedagogy first, technology second.
Lessons Learned So Far
From our pilots, a few lessons stand out:
Moving to an in-tenancy model was essential for trust.
Pedagogy must remain the driver – we want meaningful learning conversations, not shortcuts to answers.
Expectations must be realistic. Copilot Studio has clear limitations, especially in STEM contexts where dialogue is weaker.
AI integration is as much about culture, training, and mindset as it is about the underlying technology.
Looking Ahead
As we head into 2025–26, we’re expanding staff training, refining agent ‘skills’, and building metrics to assess impact. We know this is a long-haul project – five years at least – but it feels like the right direction.
The GenAI systems that students and teachers use in college were, in the main, designed by engineers, developers, and commercial actors. What’s missing is the educator’s voice. Our work is about inserting that voice: shaping AI not just as a tool for efficiency, but as an ally for reflection, questioning, and deeper thinking.
The challenge is to keep students out of what I’ve called the ‘Cognitive Valley’, that place where understanding is lost because thinking has been short-circuited. Good pedagogical AI can help us avoid that.
We’re not there yet. Some results are excellent, others uneven. But the work is underway, and the potential is undeniable. The task now is to make GenAI fit our context, not the other way around.
Jonathan Sansom
Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Passionate about education, digital strategy in education, social and political perspectives on the purpose of learning, cultural change, wellbeing, group dynamics – and the mysteries of creativity…
To truly prepare students for tomorrow’s workforce, higher education must foster an AI-positive culture. This involves embracing artificial intelligence not as a threat, but as a transformative tool that enhances skills and creates new opportunities in the evolving world of work. Image (and typos) generated by Nano Banana.
Source
Wonkhe
Summary
Alastair Robertson argues that higher education must move beyond piecemeal experimentation with generative AI and instead embed an “AI-positive culture” across teaching, learning, and institutional practice. While universities have made progress through policies such as the Russell Group’s principles on generative AI, most remain in an exploratory phase lacking strategic coherence. Robertson highlights the growing industry demand for AI literacy—especially foundational skills like prompting and evaluating outputs—contrasting this with limited student support in universities. He advocates co-creation among students, educators, and AI, where generative tools enhance learning personalisation, assessment, and data-driven insights. To succeed, universities must invest in technology, staff development, and policy frameworks that align AI with institutional values and foster innovation through strategic leadership and partnership with industry.
Key Points
Industry demand for AI literacy far outpaces current higher education provision.
Universities remain at an early stage of AI adoption, lacking coherent strategic approaches.
Co-creation between students, educators, and AI can deepen engagement and improve outcomes.
Embedding AI requires investment in infrastructure, training, and ethical policy alignment.
An AI-positive culture depends on leadership, collaboration, and flexibility to adapt as technology evolves.
Empowering educators for the future: A new AI and assessment training initiative is equipping lecturers with the knowledge and tools to effectively integrate artificial intelligence into their evaluation strategies, enhancing teaching and learning outcomes. Image (and typos) generated by Nano Banana.
Source
North-West University News (South Africa)
Summary
North-West University (NWU) has launched a large-scale professional development initiative to promote responsible use of artificial intelligence in teaching, learning, and assessment. The AI and Assessment course, supported by the Senior Deputy Vice-Chancellor for Teaching and Learning, the AI Hub, and the Centre for Teaching and Learning, awarded R500 Takealot vouchers to the first 800 lecturers who completed all eleven modules. Participants earned fifteen digital badges by achieving over 80 per cent in assessments and submitting a portfolio of evidence. The initiative underscores NWU’s commitment to digital transformation and capacity building. Lecturers praised the programme for strengthening their understanding of ethical and effective AI integration in higher education.
Key Points
800 NWU lecturers were incentivised to complete the AI and Assessment training course.
The programme awarded fifteen digital badges for verified completion and assessment success.
Leadership highlighted AI’s transformative role in teaching and learning innovation.
Participants reported improved confidence in using AI tools responsibly and ethically.
The initiative reinforces NWU’s institutional focus on digital capability and staff development.
Summary
This collection of essays explores how artificial intelligence—particularly generative AI (GenAI)—is reshaping the university sector across teaching, research, and administration. Contributors, including Dame Wendy Hall, Vinton Cerf, Rose Luckin, and others, argue that AI represents a profound structural shift rather than a passing technological wave. The report emphasises that universities must respond strategically, ethically, and holistically: developing AI literacy among staff and students, redesigning assessment, and embedding responsible innovation into governance and institutional strategy.
AI is portrayed as both a disruptive and creative force. It automates administrative processes, accelerates research, and transforms strategy-making, while simultaneously challenging ideas of authorship, assessment, and academic integrity. Luckin and others call for universities to foster uniquely human capacities—critical thinking, creativity, emotional intelligence, and metacognition—so that AI augments rather than replaces human intellect. Across the essays, there is strong consensus that AI literacy, ethical governance, and institutional agility are vital if universities are to remain credible and relevant in the AI era.
Key Points
GenAI is reshaping all aspects of higher education teaching and learning.
AI literacy must be built into curricula, staff training, and institutional culture.
Faculty should use GenAI to enhance creativity and connection, not replace teaching.
Clear, flexible policies are needed for responsible and ethical AI use.
Institutions must prioritise equity, inclusion, and closing digital divides.
Ongoing professional development in AI is essential for staff and administrators.
Collaboration across institutions and with industry accelerates responsible adoption.
Assessment and pedagogy must evolve to reflect AI’s role in learning.
GenAI governance should balance innovation with accountability and transparency.
Shared toolkits and global practice networks can scale learning and implementation.
Conclusion
The Action Plan positions GenAI as both a challenge and a catalyst for renewal in higher education. Institutions that foster literacy, ethics, and innovation will not only adapt but thrive. Teaching with AI is framed as a collective, values-led enterprise—one that keeps human connection, creativity, and critical thinking at the centre of the learning experience.