HEA – Generative AI in Higher Education Teaching & Learning: Policy Framework


Source

O’Sullivan, James, Colin Lowry, Ross Woods & Tim Conlon. Generative AI in Higher Education Teaching &
Learning: Policy Framework. Higher Education Authority, 2025. DOI: 10.82110/073e-hg66.

Summary

This policy framework provides a national, values-based approach to guiding the adoption of generative artificial intelligence (GenAI) in teaching and learning across Irish higher education institutions. Rather than prescribing uniform rules, it establishes a shared set of principles to support informed, ethical, and pedagogically sound decision-making. The framework recognises GenAI as a structural change to higher education—particularly to learning design, assessment, and academic integrity—requiring coordinated institutional and sector-level responses rather than ad hoc or individual initiatives.

Focused explicitly on teaching and learning, the framework foregrounds five core principles: academic integrity and transparency; equity and inclusion; critical engagement, human oversight, and AI literacy; privacy and data governance; and sustainable pedagogy. It emphasises that GenAI should neither be uncritically embraced nor categorically prohibited. Instead, institutions are encouraged to adopt proportionate, evidence-informed approaches that preserve human judgement, ensure fairness, protect student data, and align AI use with the public mission of higher education. The document also outlines how these principles can be operationalised through governance, assessment redesign, staff development, and continuous sector learning.

Key Points

  • The framework offers a shared national reference point rather than prescriptive rules.
  • GenAI is treated as a systemic pedagogical challenge, not a temporary disruption.
  • Academic integrity depends on transparency, accountability, and visible authorship.
  • Equity and inclusion must be designed into AI adoption from the outset.
  • Human oversight and critical engagement remain central to learning and assessment.
  • AI literacy is positioned as a core capability for staff and students.
  • Privacy, data protection, and institutional data sovereignty are essential.
  • Assessment practices must evolve beyond reliance on traditional written outputs.
  • Sustainability includes both environmental impact and long-term educational quality.
  • Ongoing monitoring and sector-wide learning are critical to responsible adoption.

Conclusion

The HEA Policy Framework positions generative AI as neither a threat to be resisted nor a solution to be uncritically adopted. By grounding AI integration in shared academic values, ethical governance, and pedagogical purpose, it provides Irish higher education with a coherent foundation for navigating AI-enabled change while safeguarding trust, equity, and educational integrity.

Keywords

URL

https://hea.ie/2025/12/22/hea-publishes-national-policy-framework-on-generative-ai-in-teaching-and-learning/

Summary generated by ChatGPT 5.2


Australian Framework for Artificial Intelligence in Higher Education


Source

Lodge, J. M., Bower, M., Gulson, K., Henderson, M., Slade, C., & Southgate, E. (2025). Australian Framework for Artificial Intelligence in Higher Education. Australian Centre for Student Equity and Success, Curtin University.

Summary

This framework provides a national roadmap for the ethical, equitable, and effective use of artificial intelligence (AI)—including generative and agentic AI—across Australian higher education. It recognises both the transformative potential and inherent risks of AI, calling for governance structures, policies, and pedagogies that prioritise human flourishing, academic integrity, and cultural inclusion. The framework builds on the Australian Framework for Generative AI in Schools but is tailored to the unique demands of higher education: research integrity, advanced scholarship, and professional formation in AI-enhanced contexts.

Centred around seven guiding principles—human-centred education, inclusive implementation, ethical decision-making, Indigenous knowledges, ethical development, adaptive skills, and evidence-informed innovation—the framework links directly to the Higher Education Standards Framework (Threshold Standards) and the UN Sustainable Development Goals. It emphasises AI literacy, Indigenous data sovereignty, environmental sustainability, and the co-design of equitable AI systems. Implementation guidance includes governance structures, staff training, assessment redesign, cross-institutional collaboration, and a coordinated national research agenda.

Key Points

  • AI in higher education must remain human-centred and ethically governed.
  • Generative and agentic AI should support, not replace, human teaching and scholarship.
  • Institutional AI frameworks must align with equity, inclusion, and sustainability goals.
  • Indigenous knowledge systems and data sovereignty are integral to AI ethics.
  • AI policies should be co-designed with students, staff, and First Nations leaders.
  • Governance requires transparency, fairness, accountability, and contestability.
  • Staff professional learning should address ethical, cultural, and environmental dimensions.
  • Pedagogical design must cultivate adaptive, critical, and reflective learning skills.
  • Sector-wide collaboration and shared national resources are key to sustainability.
  • Continuous evaluation ensures AI enhances educational quality and social good.

Conclusion

The framework positions Australia’s higher education sector to lead in responsible AI adoption. By embedding ethical, equitable, and evidence-based practices, it ensures that AI integration strengthens—not undermines—human expertise, cultural integrity, and educational purpose. It reaffirms universities as stewards of both knowledge and justice in an AI-shaped future.

Keywords

URL

https://www.acses.edu.au/publication/australian-framework-for-artificial-intelligence-in-higher-education/

Summary generated by ChatGPT 5.1


Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy

By James Hanley, Oliver Harris, Caitlin Walsh, Sam Blanch, Dakota Venn-Keane, Eve Whelan, Luke Kiely, Jake Power, and Alex Rockett Power in collaboration with ChatGPT and Dr Hazel Farrell
Estimated reading time: 7 minutes
The future is now! BA (Hons) Music students from SETU in a vibrant “futureville” setting, blending the timeless artistry of music with cutting-edge technological imagination.

In recognition of how deeply AI is becoming embedded in the educational landscape, a co-created assignment exploring possibilities for music educators was considered timely. As part of the Year 3 Music Pedagogy module at South East Technological University (SETU), students were tasked with designing a learning activity that meaningfully integrated AI into the process. They were asked not only to create a resource but to trial it, evaluate it, and critically reflect on how AI shaped the learning experience. A wide range of free AI tools were used, including ChatGPT, Suno, Audacity, Napkin, Google Gemini, NotebookLM, and ElevenLabs, and each student focused on a teaching resource that resonated with them, such as interactive tools, infographics, lesson plans, and basic websites.

Across their written and audio reflections, a rich picture emerged: AI is powerful, fallible, inspiring, frustrating, and always dependent on thoughtful human oversight. This blog is based on their reflections, which reveal a generation of educators learning not just how to use AI, but why it must be used with care.

Expanding Pedagogical Possibilities

Students consistently highlighted AI’s ability to accelerate creativity and resource development. Several noted that AI made it easier to create visually engaging materials, such as diagrams, colourful flashcards, or child‑friendly graphics. One student reflected, “With just a click of the mouse, anyone can generate their own diagrams and flash cards for learning,” emphasising how AI allowed them to design tools they would otherwise struggle to produce manually.

Others explored AI‑generated musical content. One student used a sight‑reading generator to trial melodic exercises, observing that while the exercises themselves were well‑structured, “the feedback was exceedingly generous.” Another used ChatGPT to build a lesson structure, describing the process as “seamless and streamlined,” though it still required adjustments to ensure accuracy and alignment with Irish terminology. One reflection explained, “AI can create an instrumental track in a completely different style, but it still needs human balance through EQ, compression, and reverb to make it sound natural.” This demonstrated how AI and hands-on editing can work together to develop both musical and technical skills.

Another student designed an interactive rhythm game for children, using ChatGPT to progressively refine layout, colour schemes, difficulty levels, and supportive messages such as “Nice timing!” and “Perfect rhythm!” They described an iterative process requiring over 30 versions as the model continuously adapted to new instructions. The result was a working single‑player prototype that demonstrated both the creative potential and technical limits of AI‑assisted design.

The Teacher’s Role Remains Central

Across all reflections, students expressed strong awareness that AI cannot replace fundamental aspects of music teaching. Human judgment, accuracy, musical nuance, and relational connection were seen as irreplaceable. One student wrote that although AI can generate ideas and frameworks, “the underlying educational thinking remained a human responsibility.” Another reflected on voice‑training tools, noting that constant pitch guidance from AI could become “a crutch,” misleading students into believing they were singing correctly even when not. Many recognised that while AI can speed up creative processes, the emotional control, balance, and overall musical feel must still come from human input. One reflection put it simply: “AI gives you the idea, but people give it life.”

There was also a deep recognition of the social dimension of teaching. As one student put it, the “teacher–student relationship bears too much of an impact” to be substituted by automated tools. Many emphasised that confidence‑building, emotional support, and adaptive feedback come from real educators, not algorithms.

Challenges, Risks, and Ethical Considerations

The assignment surfaced several important realisations, including the fact that technical inaccuracies were common. Students identified incorrect musical examples, inconsistent notation, malfunctioning website features, and audio‑mixing problems. One student documented how, over time, the “quality of the site got worse,” illustrating AI’s tendency to forget earlier instructions in long interactions. This reinforced the need for rigorous verification when creating learning materials.

Another reflection noted that not all AI websites perform equally; some produce excellent results, while others generate distorted or incomplete outputs, forcing teachers to try multiple tools before finding one that works. It also reminded educators that even free or simple programs, like basic versions of Audacity, can still teach valuable mixing and editing skills without needing expensive software. A parallel concern was over‑reliance. Students worried that teachers might outsource too much planning to AI or that learners might depend on automated feedback rather than developing critical listening skills. As one reflection warned, “AI can and will become a key tool… the crucial factor is that we as real people know where the line is between a ‘tool’ and a ‘free worker.’”

Equity of access also emerged as a barrier. Subscription‑based AI tools required credits or payment, creating challenges for students and highlighting ethical tensions between commercial technologies and educational use. Students demonstrated strong awareness of academic integrity. They distinguished between using AI to support structure and clarity versus allowing AI to generate entire lessons or presentations. One student cautioned that presenting AI‑produced content as one’s own is “blatant plagiarism,” highlighting the need for transparent and ethical practice.

Learning About Pedagogy and Professional Identity

Many students described developing a clearer sense of themselves as educators. They reflected on the complexity of communicating clearly, engaging learners, and designing accessible content. Some discovered gaps in their teaching confidence; others found new enthusiasm for pedagogical design. One wrote, “Teaching and clearly communicating my views was more challenging than I assumed,” acknowledging the shift from student to teacher mindset. Another recognised that while AI could support efficiency, it made them more aware of their responsibility for accuracy and learner experience.

Imagining the Future of AI in Music Education

Students were divided between optimism and caution. Some saw AI becoming a standard part of educational resource creation, enabling personalised practice, interactive learning, and rapid content generation. Others expressed concern about the possibility of AI replacing human instruction if not critically managed. However, all students agreed on one point: AI works best when treated as a supportive tool rather than an autonomous teacher. As one reflection summarised, “It is clear to me that AI is by no means a replacement for musical knowledge or teaching expertise.” Another added, “AI can make the process faster and more creative, but it still needs the human touch to sound right.”

Dr Hazel Farrell

Academic Lead for GenAI, Programme Leader BA (Hons) Music
South East Technological University

Dr Hazel Farrell is the SETU Academic Lead for Generative AI, and lead for the N-TUTORR National Gen AI Network project GenAI:N3, which aims to draw on expertise across the higher education sector to create a network and develop resources to support staff and students. She has presented her research on integrating AI into the classroom in a multitude of national and international forums focusing on topics such as Gen AI and student engagement, music education, assessment re-design, and UDL.

Keywords


The Transformative Power of Communities of Practice in AI Upskilling for Educators

By Bernie Goldbach, RUN EU SAP Lead
Estimated reading time: 5 minutes
The power of collaboration: Communities of Practice are essential for educators to collectively navigate and integrate new AI technologies, transforming teaching and learning through shared knowledge and support. Image (and typos) generated by Nano Banana.

When the N-TUTORR programme ended in Ireland, I remained seated in the main Edtech25 auditorium to hear some of the final conversations among key players. They stood at a remarkable intersection of professional development and technological innovation, and some of them issued a call to action for continued conversation, perhaps through engaging with generative AI tools within a Community of Practice (CoP).

Throughout my 40-year teaching career, I have walked pathways to genuine job satisfaction that extended far beyond simple skill acquisition. In my case, this satisfaction emerged from the synergy between collaborative learning, pedagogical innovation, and the excitement of exploring uncharted territory alongside peers who share a commitment to educational excellence.

Finding Professional Fulfillment Through Shared Learning

The journey of upskilling in generative AI feels overwhelming when undertaken in isolation. I am still looking for a structured CoP for Generativism in Education; such a community would be a rich vein of collective discovery. At the moment, I have three colleagues who help me develop my skills with ethical and sustainable use of AI.

Ethan Mollick, whose research at the Wharton School has illuminated the practical applications of AI in educational contexts, consistently emphasises that the most effective learning about AI tools happens through shared experimentation and peer discussion. His work demonstrates that educators who engage collaboratively with AI technologies develop more sophisticated mental models of how these tools can enhance rather than replace pedagogical expertise. This collaborative approach alleviates the anxiety many educators feel about technological change, replacing it with curiosity and professional confidence.

Mairéad Pratschke, whose work emphasises the importance of collaborative professional learning, has highlighted how communities create safe spaces where educators can experiment, fail, and succeed together without judgment. This psychological safety becomes the foundation upon which genuine professional growth occurs.

Frances O’Donnell, whose insights at major conferences have become invaluable resources for educators navigating the AI landscape, directs the most effective AI workshops I have attended. O’Donnell’s hands-on training at conferences such as CESI (https://www.cesi.ie), EDULEARN (https://iceri.org), ILTA (https://ilta.ie), and Online Educa Berlin (https://oeb.global) has illuminated the engaging features of instructional design that emerge when educators thoughtfully integrate AI tools. Her instructional design frameworks demonstrate how AI can support the creation of personalised learning pathways, adaptive assessments, and multimodal content that engages diverse learners. O’Donnell’s emphasis on the human element in AI-assisted design resonates deeply with Communities of Practice.

And thanks to Frances O’Donnell, I discovered the AI assistants inside H5P.

Elevating Instructional Design Through AI-Assisted Tools

Clever educators personify quality instructional design, and the most significant leap I have made in that craft has come from combining AI tools with collaborative professional learning. The commercial version of H5P (https://h5p.com) has revolutionised my workflow when creating interactive educational content. The smart import feature of H5P.com complements my teaching practice: I can quickly design rich, engaging learning experiences that would previously have required specialised technical skills or significant time investments. I have discovered ways to create everything from interactive videos with embedded questions to gamified quizzes and sophisticated branching scenarios.

I hope I find a CoP in Ireland that is interested in several of the H5P workflows I have adopted. For the moment, I’m revealing these remarkable capabilities while meeting people at education events in Belgium, Spain, Portugal, and the Netherlands. It feels like I’m a town crier who has a notebook full of shared templates. I want to offer links to the interactive content that I have created with H5P AI and gain feedback from interested colleagues. But more than the conversations at the conferences, I’m interested in making real connections with educators who want to actively participate in vibrant online communities where sustained professional learning continues.

Sustaining Innovation with Community

Job satisfaction among educators has always been closely tied to their sense of efficacy and their ability to make meaningful impacts on student learning. Communities of Practice focused on AI upskilling amplify this satisfaction by creating networks of mutual support where members celebrate innovations, troubleshoot challenges, and collectively develop best practices. When an educator discovers an effective way to use AI for differentiation or assessment design, sharing that discovery with colleagues who understand the pedagogical context creates a profound sense of professional contribution.

These communities also combat the professional tension that currently faces proficient AI users. Mollick’s observations about blowback against widespread AI adoption in education reveal a critical imperative to stand together with a network that validates the quality of teaching and provides constructive feedback. When sharing with a community, individual risk-taking morphs into collective innovation, making the professional development experience inherently more satisfying and sustainable.

We need the spark of N-TUTORR inside an AI-focused Community of Practice. We need to amplify voices. Together we need to become confident navigators of innovation. We need to co-create contextually appropriate pedagogical approaches that effectively leverage AI in education.


Keywords


This is not the end but a beginning: Responding to “Something Wicked This Way Comes”

By Kerith George-Briant and Jack Hogan, Abertay University Dundee
Estimated reading time: 5 minutes
Navigating the future: The “Two-Lane Approach” to Generative AI in assessment—balancing secure testing of threshold concepts (Lane 1) with open collaboration for developing AI literacy and critical thinking (Lane 2). Image (and typos) generated by Nano Banana.

O’Mahony’s provocatively titled “Something Wicked This Way Comes” blog outlined feelings we recognised from across the sector: Generative AI (GenAI) tools have created unease, disruption, and uncertainty. At the same time, we felt that GenAI presented huge opportunities, and, given that higher education has led and celebrated innovation in all disciplines over centuries, we were intrigued by how this would translate into our assessment practices.

At Abertay University, we’ve been exploring the “wicked problem” of whether to change teaching practices through a small-scale research project entitled “Lane Change Ahead: Artificial Intelligence’s Impact on Assessment Practices.” Our findings support O’Mahony’s observations: while GenAI does pose a challenge to academic integrity and traditional assessment models, it also offers opportunities for innovation, equity, and deeper learning. We must respond thoughtfully, however, and acknowledge that there are a variety of views on GenAI.

Academic Sensemaking

To understand colleagues’ perspectives and experiences, we applied Degn’s (2016) concept of academic sensemaking to understand how the colleagues we interviewed felt about GenAI. Findings showed that some assessment designers are decoupling, designing assessments that use GenAI outputs without requiring students to engage with the tools. Others are defiant or defeatist, allowing limited collaboration with GenAI tools but awarding a low percentage of the grade to that output. And some are strategic and optimistic, embracing GenAI as a tool for learning, creativity, and employability.

The responses show the reasons for unease are not just pedagogical; they’re deeply personal. GenAI challenges academic identity. Recognising this emotional response is essential to supporting staff if change is needed.

Detection and the Blurred Line

And change is needed, we would argue. Back in 2023, Perkins et al.’s analysis of Turnitin’s AI detection capabilities revealed that while 91% of fully AI-generated submissions were flagged, the average detection within each paper was only 54.8%, and only half of those flagged papers would have been referred for academic misconduct. Similar studies since then have continued to show the same types of results. And if detection isn’t possible, setting an “absurd line”, as referred to by Corbin et al., is ever more incongruous. There is no reliable way to tell whether a student has stopped at the point of using AI for brainstorming or has engaged critically with AI-paraphrased output. Some may read this and think that it’s game over; however, if we embrace these challenges and adapt our approaches, we can find solutions that are fit for purpose.

From Fear to Framework: The Two-Lane Approach

So, what is the solution? Our research explored whether the two-lane approach developed by Liu and Bridgeman would work at Abertay, where:

  • Lane 1: Secure Assessments would be conducted under controlled conditions to assure learning of threshold concepts and
  • Lane 2: Open Assessments would allow unrestricted use of GenAI.

Our case studies revealed three distinct modes of GenAI integration:

  • AI Output Only – Students critiqued AI-generated content without using GenAI themselves. This aligned with Lane 1 and a secure assessment method focusing on threshold concepts.
  • Limited Collaboration – Students used GenAI for planning and for a minimal piece of output within a larger assessment that otherwise did not allow GenAI use. Students developed some critical thinking, but weren’t able to apply this learning to the whole assessment.
  • Unlimited Collaboration – Students were fully engaged with GenAI, with reflection and justification built into the assessment. Assessment designers reported that students produced higher-quality work and demonstrated enhanced critical thinking.

Each mode reflected a different balance of trust, control, and pedagogical intent. Interestingly, the AI Output pieces were secure and used to build AI literacy while meeting PSRB (professional, statutory and regulatory body) requirements, which asked for certain competencies and skills to be tested. The limited collaboration had an element of open assessment, but the percentage of the grade awarded to the output was minimal, and an absurd line was created by asking for no AI use in the larger part of the assessment. Finally, the assessments with unlimited collaboration were designed because those colleagues believed that writing without GenAI was not authentic and that employers would expect AI literacy skills, a belief perhaps not misplaced given the figure cited in O’Mahony’s blog.

Reframing the Narrative: GenAI as Opportunity

We see the need to treat GenAI as a partner in education, one that encourages critical reflection. This will require carefully scaffolded teaching activities to develop students’ AI literacy and avoid cognitive offloading. Thankfully, ways forward have begun to appear, as noted in the work of Gerlich and Jose et al.

Conclusion: From Wicked to ‘Witch’ lane?

As educators, we have a choice. We can resist and decouple from GenAI, or we can choose to lead the narrative strategically and optimistically. Although the pathway forward may not be a yellow brick road, we believe it’s worth considering which lane may suit us best. The key is that we don’t do this in isolation, but take a pragmatic approach across our entire degree programmes, considering the level of study and the appropriate AI literacy skills.

GenAI acknowledgement:
Microsoft Copilot (https://copilot.microsoft.com) – used to create a draft blog from our research paper.

Kerith George-Briant

Learner Development Manager
Abertay University

Kerith George-Briant manages the Learner Development Service at Abertay. Her key interests are in building best practices in using AI, inclusivity, and accessibility.

Jack Hogan

Lecturer in Academic Practice
Abertay University

Jack Hogan works within the Abertay Learning Enhancement (AbLE) Academy as a Lecturer in Academic Practice. His research interests include student transitions and the first-year experience, microcredentials, skills development and employability. 


Keywords