QQI Generative Artificial Intelligence Survey Report 2025


Source

Quality and Qualifications Ireland (QQI), August 2025

Summary

This national survey captures the views of 1,229 staff and 1,005 learners across Ireland’s further, higher, and English language education sectors on their knowledge, use, and perceptions of generative AI (GenAI). The report reveals growing engagement with GenAI but also wide disparities in understanding, policy, and preparedness. Most respondents recognise AI’s transformative impact but remain uncertain about its role in assessment, academic integrity, and employability.

While over 80% of staff and learners believe GenAI will significantly change education and work over the next five years, few feel equipped to respond. Only 20% of staff and 14% of learners report access to GenAI training. Policies are inconsistent or absent, with most institutions leaving decisions on use to individual educators. Both staff and learners support transparent, declared use of GenAI but express concerns about bias, overreliance, loss of essential skills, and declining trust in qualifications. Respondents call for coherent national and institutional policies, professional development, and curriculum reform that balances innovation with integrity.

Key Points

  • 82% of respondents expect GenAI to transform learning and work within five years.
  • 63% of staff and 36% of learners believe GenAI literacy should be explicitly taught.
  • Fewer than one in five institutions currently provide structured GenAI training.
  • Policies on GenAI use are inconsistent, unclear, or absent in most institutions.
  • Over half of respondents fear skill erosion and reduced academic trust from AI use.
  • 70% of staff say assessment rules for GenAI lack clarity or consistency.
  • 83% of learners believe GenAI will change how they are assessed.
  • Staff and learners call for transparent declaration of GenAI use in assignments.
  • 61% of staff feel learners are unprepared to use GenAI responsibly in the workplace.
  • Respondents emphasise ethical governance, inclusion, and sustainable AI adoption.

Conclusion

The survey highlights a critical moment for Irish education: generative AI is already influencing learning and work, yet systems for policy, training, and ethics are lagging behind. To maintain public trust and educational relevance, QQI recommends a coordinated national response centred on transparency, AI literacy, and values-led governance that equips both learners and educators for an AI-driven future.

Keywords

URL

https://www.qqi.ie/sites/default/files/2025-08/generative-artificial-intelligence-survey-report-2025.pdf

Summary generated by ChatGPT 5


Explainable AI in education: Fostering human oversight and shared responsibility


Source

The European Digital Education Hub

Summary

This European Digital Education Hub report explores how explainable artificial intelligence (XAI) can support trustworthy, ethical, and effective AI use in education. XAI is positioned as central to ensuring transparency, fairness, accountability, and human oversight in educational AI systems. The document frames XAI within EU regulation, including the AI Act, GDPR, and the Digital Services Act, highlighting its role in protecting rights while fostering innovation. It stresses that explanations of AI decisions must be understandable, context-sensitive, and actionable for learners, educators, policymakers, and developers alike.

The report emphasises both the technical and human dimensions of XAI, defining four key concepts: transparency, interpretability, explainability, and understandability. Practical applications include intelligent tutoring systems and AI-driven lesson planning, with case studies showing how different stakeholders perceive risks and benefits. A major theme is capacity-building: educators need new competences to critically assess AI, integrate it responsibly, and communicate its role to students. Ultimately, XAI is not only a technical safeguard but a pedagogical tool that fosters agency, metacognition, and trust.

Key Points

  • XAI enables trust in AI by making systems transparent, interpretable, explainable, and understandable.
  • EU frameworks (AI Act, GDPR) require AI systems in education to meet legal standards of fairness, accountability, and transparency.
  • Education use cases include intelligent tutoring systems and lesson-plan generators, where human oversight remains critical.
  • Stakeholders (educators, learners, developers, policymakers) require tailored explanations at different levels of depth.
  • Teachers need competences in AI literacy, critical thinking, and the ethical use of XAI tools.
  • Explanations should align with pedagogical goals, fostering self-regulated learning and student agency.
  • Risks include bias, opacity of data-driven models, and threats to academic integrity if explanations are weak.
  • Opportunities lie in supporting inclusivity, accessibility, and personalised learning.
  • Collaboration between developers, educators, and authorities is essential to balance innovation with safeguards.
  • XAI in education is about shared responsibility—designing systems where humans remain accountable and learners remain empowered.

Conclusion

The report concludes that explainable AI is a cornerstone for trustworthy AI in education. It bridges technical transparency with human understanding, ensuring compliance with EU laws while empowering educators and learners. By embedding explainability into both AI design and classroom practice, education systems can harness AI’s benefits responsibly, maintaining fairness, accountability, and human agency.

Keywords

URL

https://knowledgeinnovation.eu/kic-publication/explainable-ai-in-education-fostering-human-oversight-and-shared-responsibility/

Summary generated by ChatGPT 5


2025 Horizon Report: Teaching and Learning Edition


Source

EDUCAUSE

Summary

The 2025 Horizon Report highlights generative AI (GenAI) as one of the most disruptive forces shaping higher education teaching and learning. It frames GenAI not merely as a technological trend but as a catalyst for rethinking pedagogy, assessment, ethics, and institutional strategy. GenAI tools are now widely available, reshaping how students learn, produce work, and engage with knowledge. The report emphasises both opportunities—personalisation, creativity, and efficiency—and risks, including misinformation, bias, overreliance, and threats to academic integrity.

Institutions are urged to move beyond reactive bans or detection measures and instead adopt values-led, strategic approaches to GenAI integration. This involves embedding AI literacy across curricula, supporting staff development, and redesigning assessments to focus on authentic, process-based demonstrations of learning. Ethical considerations are central: ensuring equity of access, safeguarding privacy, addressing sustainability, and clarifying boundaries of responsible use. GenAI is framed as a general-purpose technology—akin to the internet or electricity—that will transform higher education in profound and ongoing ways.

Key Points

  • GenAI is a general-purpose technology reshaping teaching and learning.
  • Opportunities include personalised learning, enhanced creativity, and staff efficiency.
  • Risks involve misinformation, bias, overreliance, and compromised academic integrity.
  • Detection tools are unreliable; focus should shift to assessment redesign.
  • AI literacy is essential for both staff and students across disciplines.
  • Equity and access must be prioritised to avoid deepening divides.
  • Ethical frameworks should guide responsible, transparent use of GenAI.
  • Sustainability concerns highlight the energy and resource costs of AI.
  • Institutional strategy must integrate GenAI into digital transformation plans.
  • Faculty development and sector-wide collaboration are critical for adaptation.

Conclusion

The report concludes that generative AI is no passing trend but a structural shift in higher education. Its potential to augment teaching and learning is significant, but it will be realised only if institutions adopt proactive, ethical, and pedagogically grounded approaches. Success lies not in resisting GenAI, but in reimagining educational practices so that students and staff can use it critically, creatively, and responsibly.

Keywords

URL

https://library.educause.edu/resources/2025/5/2025-educause-horizon-report-teaching-and-learning-edition

Summary generated by ChatGPT 5


New Horizons for Higher Education: Teaching and Learning with Generative AI


Source

N-TUTORR National Digital Leadership Network (NDLN) – Professor Mairéad Pratschke

Summary

This report examines how generative AI (GAI) is transforming higher education, presenting both opportunities and risks. It highlights three main areas: the impact of GAI on current teaching, assessment, and learner-centred practice; the development of emerging AI pedagogy, international best practice, and early research findings; and the broader context of digital transformation, regulation, and future skills. The analysis stresses that while GAI can enhance accessibility, personalisation, and engagement, it also raises critical concerns around academic integrity, bias, equity, and sustainability.

The report positions GAI as a general-purpose technology akin to the internet or electricity, reshaping the nature of knowledge and collaboration in higher education. It calls for institutional leaders to align AI adoption with sectoral values such as inclusion, integrity, and social responsibility, while also addressing infrastructure gaps, staff training, and regulatory compliance. To be effective, GAI use must be pedagogically aligned, ethically grounded, and strategically supported. The future success of higher education depends on preparing students not just to use AI, but to work with it critically, creatively, and responsibly.

Key Points

  • GAI challenges academic integrity but also enables personalised learning at scale.
  • Pedagogical alignment is essential: AI must support, not replace, learning processes.
  • Early research warns of overreliance and “cognitive offloading” without human oversight.
  • AI can widen inequities unless digital equity and inclusion are prioritised.
  • Institutional strategy must balance efficiency with effectiveness in learning design.
  • National and EU regulations (e.g., the AI Act) set high standards for responsible AI use.
  • Frontier AI models offer powerful capabilities but raise issues of bias and safety.
  • Educators increasingly take on roles as AI tool designers and facilitators.
  • Collaboration with industry is crucial for aligning learning with future careers and skills needs.
  • Sustained investment in infrastructure, training, and AI literacy is required.

Conclusion

Generative AI represents a transformative force in higher education. Its integration offers significant potential to augment human learning and expand access, but only if guided by values-led leadership, pedagogical rigour, and robust governance. Institutions must act strategically, embedding AI literacy and ethical practice to ensure that this “new horizon” supports both student success and the future sustainability of higher education.

Keywords

URL

https://www.ndln.ie/teaching-and-learning-with-generative-ai

Summary generated by ChatGPT 5