Explainable AI in education: Fostering human oversight and shared responsibility


Source

The European Digital Education Hub

Summary

This European Digital Education Hub report explores how explainable artificial intelligence (XAI) can support trustworthy, ethical, and effective AI use in education. XAI is positioned as central to ensuring transparency, fairness, accountability, and human oversight in educational AI systems. The document frames XAI within EU regulations (AI Act, GDPR, Digital Services Act, etc.), highlighting its role in protecting rights while fostering innovation. It stresses that explanations of AI decisions must be understandable, context-sensitive, and actionable for learners, educators, policymakers, and developers alike.

The report emphasises both the technical and human dimensions of XAI, defining four key concepts: transparency, interpretability, explainability, and understandability. Practical applications include intelligent tutoring systems and AI-driven lesson planning, with case studies showing how different stakeholders perceive risks and benefits. A major theme is capacity-building: educators need new competences to critically assess AI, integrate it responsibly, and communicate its role to students. Ultimately, XAI is not only a technical safeguard but a pedagogical tool that fosters agency, metacognition, and trust.

Key Points

  • XAI enables trust in AI by making systems transparent, interpretable, explainable, and understandable.
  • EU frameworks (AI Act, GDPR) require AI systems in education to meet legal standards of fairness, accountability, and transparency.
  • Education use cases include intelligent tutoring systems and lesson-plan generators, where human oversight remains critical.
  • Stakeholders (educators, learners, developers, policymakers) require tailored explanations at different levels of depth.
  • Teachers need competences in AI literacy, critical thinking, and the ethical use of XAI tools.
  • Explanations should align with pedagogical goals, fostering self-regulated learning and student agency.
  • Risks include bias, opacity of data-driven models, and threats to academic integrity if explanations are weak.
  • Opportunities lie in supporting inclusivity, accessibility, and personalised learning.
  • Collaboration between developers, educators, and authorities is essential to balance innovation with safeguards.
  • XAI in education is about shared responsibility—designing systems where humans remain accountable and learners remain empowered.

Conclusion

The report concludes that explainable AI is a cornerstone for trustworthy AI in education. It bridges technical transparency with human understanding, ensuring compliance with EU laws while empowering educators and learners. By embedding explainability into both AI design and classroom practice, education systems can harness AI’s benefits responsibly, maintaining fairness, accountability, and human agency.

Keywords

explainable AI (XAI); transparency; interpretability; human oversight; AI Act; GDPR; intelligent tutoring systems; AI literacy; shared responsibility

URL

https://knowledgeinnovation.eu/kic-publication/explainable-ai-in-education-fostering-human-oversight-and-shared-responsibility/

Summary generated by ChatGPT 5


2025 Horizon Report: Teaching and Learning Edition


Source

EDUCAUSE

Summary

The 2025 Horizon Report highlights generative AI (GenAI) as one of the most disruptive forces shaping higher education teaching and learning. It frames GenAI not merely as a technological trend but as a catalyst for rethinking pedagogy, assessment, ethics, and institutional strategy. GenAI tools are now widely available, reshaping how students learn, produce work, and engage with knowledge. The report emphasises both opportunities—personalisation, creativity, and efficiency—and risks, including misinformation, bias, overreliance, and threats to academic integrity.

Institutions are urged to move beyond reactive bans or detection measures and instead adopt values-led, strategic approaches to GenAI integration. This involves embedding AI literacy across curricula, supporting staff development, and redesigning assessments to focus on authentic, process-based demonstrations of learning. Ethical considerations are central: ensuring equity of access, safeguarding privacy, addressing sustainability, and clarifying boundaries of responsible use. GenAI is framed as a general-purpose technology—akin to the internet or electricity—that will transform higher education in profound and ongoing ways.

Key Points

  • GenAI is a general-purpose technology reshaping teaching and learning.
  • Opportunities include personalised learning, enhanced creativity, and staff efficiency.
  • Risks involve misinformation, bias, overreliance, and compromised academic integrity.
  • Detection tools are unreliable; focus should shift to assessment redesign.
  • AI literacy is essential for both staff and students across disciplines.
  • Equity and access must be prioritised to avoid deepening divides.
  • Ethical frameworks should guide responsible, transparent use of GenAI.
  • Sustainability concerns highlight the energy and resource costs of AI.
  • Institutional strategy must integrate GenAI into digital transformation plans.
  • Faculty development and sector-wide collaboration are critical for adaptation.

Conclusion

The report concludes that generative AI is no passing trend but a structural shift in higher education. Its potential to augment teaching and learning is significant, but only if institutions adopt proactive, ethical, and pedagogically grounded approaches. Success lies not in resisting GenAI, but in reimagining educational practices so that students and staff can use it critically, creatively, and responsibly.

Keywords

generative AI (GenAI); higher education; AI literacy; assessment redesign; academic integrity; equity and access; ethics; institutional strategy

URL

https://library.educause.edu/resources/2025/5/2025-educause-horizon-report-teaching-and-learning-edition

Summary generated by ChatGPT 5