Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators


Source

European Commission: Directorate-General for Education, Youth, Sport and Culture, Guidelines on the ethical use of artificial intelligence and data in teaching and learning for educators, Publications Office of the European Union, 2026, https://data.europa.eu/doi/10.2766/7967834

Summary

These European Commission guidelines provide practical and ethical direction for educators using artificial intelligence (AI) and data-driven technologies in teaching and learning. Aimed primarily at school education but broadly applicable across educational contexts, the document emphasises that AI should enhance human-centred, inclusive, and equitable education. It introduces a structured framework to help educators critically assess AI tools, ensuring their use aligns with pedagogical goals, respects learners’ rights, and supports professional autonomy.

The guidelines are grounded in key ethical principles, including human agency, transparency, fairness, privacy, and accountability. They highlight the importance of developing AI literacy among educators and learners, enabling them to understand how AI systems function, what data they use, and what limitations they carry. A strong emphasis is placed on critical engagement—educators are encouraged to question AI outputs, address bias, and avoid overreliance on automated systems. The document also provides a practical self-reflection tool to support educators in evaluating AI tools across dimensions such as reliability, safety, inclusiveness, and educational value.

Key Points

  • AI should support human-centred, inclusive teaching and learning.
  • Educators retain responsibility for decisions made using AI tools.
  • Transparency and explainability are essential for trust in AI systems.
  • AI literacy is critical for both teachers and learners.
  • Data protection and privacy must comply with GDPR principles.
  • Bias and fairness must be actively monitored and mitigated.
  • Educators should critically evaluate AI outputs and limitations.
  • AI tools should align with pedagogical goals, not drive them.
  • A self-reflection framework supports responsible AI adoption.
  • Ethical use of AI requires ongoing professional development and awareness.

Conclusion

The guidelines position AI as a valuable but carefully bounded tool in education. By embedding ethical reflection, critical engagement, and human oversight into everyday practice, educators can harness AI’s benefits while protecting learner rights, educational integrity, and professional judgement.

Keywords

URL

https://op.europa.eu/en/publication-detail/-/publication/f692aa0b-17a7-11f1-8870-01aa75ed71a1

Summary generated by ChatGPT 5.3


Australian Framework for Artificial Intelligence in Higher Education


Source

Lodge, J. M., Bower, M., Gulson, K., Henderson, M., Slade, C., & Southgate, E. (2025). Australian Framework for Artificial Intelligence in Higher Education. Australian Centre for Student Equity and Success, Curtin University.

Summary

This framework provides a national roadmap for the ethical, equitable, and effective use of artificial intelligence (AI)—including generative and agentic AI—across Australian higher education. It recognises both the transformative potential and inherent risks of AI, calling for governance structures, policies, and pedagogies that prioritise human flourishing, academic integrity, and cultural inclusion. The framework builds on the Australian Framework for Generative AI in Schools but is tailored to the unique demands of higher education: research integrity, advanced scholarship, and professional formation in AI-enhanced contexts.

Centred around seven guiding principles—human-centred education, inclusive implementation, ethical decision-making, Indigenous knowledges, ethical development, adaptive skills, and evidence-informed innovation—the framework links directly to the Higher Education Standards Framework (Threshold Standards) and the UN Sustainable Development Goals. It emphasises AI literacy, Indigenous data sovereignty, environmental sustainability, and the co-design of equitable AI systems. Implementation guidance includes governance structures, staff training, assessment redesign, cross-institutional collaboration, and a coordinated national research agenda.

Key Points

  • AI in higher education must remain human-centred and ethically governed.
  • Generative and agentic AI should support, not replace, human teaching and scholarship.
  • Institutional AI frameworks must align with equity, inclusion, and sustainability goals.
  • Indigenous knowledge systems and data sovereignty are integral to AI ethics.
  • AI policies should be co-designed with students, staff, and First Nations leaders.
  • Governance requires transparency, fairness, accountability, and contestability.
  • Staff professional learning should address ethical, cultural, and environmental dimensions.
  • Pedagogical design must cultivate adaptive, critical, and reflective learning skills.
  • Sector-wide collaboration and shared national resources are key to sustainability.
  • Continuous evaluation ensures AI enhances educational quality and social good.

Conclusion

The framework positions Australia’s higher education sector to lead in responsible AI adoption. By embedding ethical, equitable, and evidence-based practices, it ensures that AI integration strengthens—not undermines—human expertise, cultural integrity, and educational purpose. It reaffirms universities as stewards of both knowledge and justice in an AI-shaped future.

Keywords

URL

https://www.acses.edu.au/publication/australian-framework-for-artificial-intelligence-in-higher-education/

Summary generated by ChatGPT 5.1


‘We Could Have Asked ChatGPT’: Students Fight Back Over Course Taught by AI


The revolt against automation: Capturing the frustration of students pushing back against educational institutions that rely on AI to replace human instructors. Image (and typos) generated by Nano Banana.

Source

The Guardian

Summary

Students on a coding apprenticeship at the University of Staffordshire say they were “robbed of knowledge” after discovering that large portions of their course materials—including slides, assignments and even voiceovers—were generated by AI. Despite university policies restricting students’ use of AI, staff appeared to rely heavily on AI-generated teaching content, leading to accusations of hypocrisy and declining trust in the programme. Students reported inconsistent editing, generic content and bizarre glitches such as a mid-video switch to a Spanish accent. Complaints brought little change, and although human lecturers delivered the final session, students argue the damage to their learning and career prospects has already been done. The case highlights rising tensions as universities increasingly adopt AI tools without transparent standards or safeguards.

Key Points

  • Staffordshire students discovered widespread use of AI-generated slides, tasks and videos.
  • AI usage contradicted strict policies prohibiting students from submitting AI-generated work.
  • Students reported generic content, inconsistent editing and AI voiceover glitches.
  • Repeated complaints yielded limited response; a human lecturer was added only at the end.
  • Students fear lost learning, reduced programme credibility and wasted time.

Keywords

URL

https://www.theguardian.com/education/2025/nov/20/university-of-staffordshire-course-taught-in-large-part-by-ai-artificial-intelligence

Summary generated by ChatGPT 5


Outsourced Thinking? Experts Consider AI’s Impact on Our Brains


The cognitive shift: Experts are weighing the potential impact of AI reliance—is it a tool for enhancement, or are we outsourcing the very processes that keep our brains sharp? Image (and typos) generated by Nano Banana.

Source

RTÉ Prime Time

Summary

RTÉ explores emerging concerns about how widespread AI use may alter human cognition. With almost 800 million ChatGPT users globally and Ireland among the world’s heaviest users, scientists warn that convenience may carry hidden cognitive costs. An MIT study using brain imaging found reduced neural activity when participants relied on ChatGPT, suggesting diminished critical evaluation. Irish neuroscientist Paul Dockree cautions that outsourcing tasks like writing and problem-solving could erode core cognitive skills, similar to over-dependency on GPS. Others draw parallels with aviation, where automation has weakened pilots’ manual skills. While some users praise AI’s benefits, experts warn of a potential “two-tier society” of empowered critical thinkers and those who grow dependent on automated reasoning.

Key Points

  • AI adoption is extremely rapid; Ireland has one of the highest global usage rates.
  • MIT research indicates reduced brain activity when using ChatGPT for problem-solving.
  • Cognitive scientists warn of long-term skill decline if AI replaces active thinking.
  • Automation parallels in aviation show how skills can erode without practice.
  • Public reactions are mixed, reflecting broader uncertainty about AI’s cognitive impact.

Keywords

URL

https://www.rte.ie/news/primetime/2025/1111/1543356-outsourced-thinking-experts-consider-ais-impact-on-our-brains/

Summary generated by ChatGPT 5


AI Could Revolutionise Higher Education in a Way We Did Not Expect

by Brian Mulligan – e-learning consultant with Universal Learning Systems (ulsystems.com)
Estimated reading time: 5 minutes
Artificial intelligence is poised to unleash a revolution in higher education, not in the ways we’ve conventionally imagined, but through unexpected and profound transformations. This image visualises AI as a central, dynamic force reshaping academic landscapes, curriculum delivery, and the very nature of learning in universities. Image (and typos) generated by Nano Banana.

The current conversation about Artificial Intelligence (AI) in higher education primarily focuses on efficiency and impact. People talk about how AI can personalise learning, streamline administrative tasks, and help colleges “do more with less.” For decades, every new technology, from online training to MOOCs, promised a similar transformation. Generative AI certainly offers powerful tools to enhance existing processes.

However, the truly revolutionary potential of AI in higher education may come from a more critical and urgent pressure: its significant challenge to the integrity of academic credentials and the learning processes they are supposed to represent.

Historically, colleges haven’t had a strong incentive to completely overhaul their teaching models just because new technology arrived. Traditional lectures, established assessment methods, and the value of a physical campus have remained largely entrenched. Technology usually just served to augment existing practices, not to transform the underlying structures of teaching, learning, and accreditation.

AI, however, may be a different kind of catalyst for change.

The Integrity Challenge

AI’s ability to create human-quality text, solve complex problems, and produce creative outputs has presented a serious challenge to academic integrity. Reports show a significant rise in AI-driven cheating, with many students now routinely using these tools to complete their coursework. For a growing number of students, offloading cognitive labour, from summarising readings to generating entire essays, to AI is becoming the new norm.

This widespread and mostly undetectable cheating compromises the entire purpose of assessment: to verify genuine learning and award credible qualifications. Even students committed to authentic learning feel compromised, forced to compete against peers using AI for an unfair advantage.

Crucially, even when AI use is approved, there’s a legitimate concern that it can undermine the learning process itself. If students rely on AI for foundational tasks like summarisation and idea generation, they may bypass the essential cognitive engagement and critical thinking development. This reliance can lead to intellectual laziness, meaning the credentials universities bestow may no longer reliably signify genuine knowledge and skills. This creates an urgent imperative for institutions to act.

The Shift to Authentic Learning

While many believe we can address this simply by redesigning assignments, the challenge invites, and may even require, a structural shift towards more radical educational models. These new approaches, which have been emerging to address the challenges of quality, access and cost, may also prove to be the most effective responses to the academic integrity problem.

To illustrate the point, let’s look at three examples of such emerging models:

  1. Flipped Learning: Students engage with core content independently online. Valuable in-person time is then dedicated to active learning like problem-solving, discussions, and collaborative projects. Educators can directly observe the application of knowledge, allowing for a more authentic assessment of understanding.
  2. Project-Based Learning (PBL): Often seen as an integrated flipped model, PBL immerses students in complex, integrated projects over extended periods. The focus is on applying knowledge from multiple modules and independent research to solve real-world problems. These projects demand sustained, supervised engagement, creative synthesis, and complex problem-solving, capabilities that are very hard to simply outsource to AI.
  3. Work-Based Learning (WBL): A significant part of the student’s journey takes place in authentic workplace settings. The emphasis shifts entirely to the demonstrable application of skills and knowledge in genuine professional contexts, a feat AI alone cannot achieve. Assessment moves to evaluating how a student performs and reflects in their role, including how they effectively and ethically integrate AI tools professionally.

AI as the Enabler of Change

Shifting to these models isn’t easy. Can institutions afford the resources to develop rich content, intricate project designs, and robust supervisory frameworks? Creating and assessing numerous, varied, and authentic tasks requires significant time and financial investment.

This is where technology, now including AI itself, becomes the key enabler for the feasibility of these new pedagogical approaches. Learning technologies, intelligently deployed, can help by:

  • Affordably Creating Content: AI tools rapidly develop diverse learning materials, including texts, videos and formative quizzes as well as more sophisticated assessment designs.
  • Providing Automated Learning Support: AI-powered tutors and chatbots offer 24/7 support, guiding students through challenging material, which personalises the learning journey.
  • Monitoring Independent Work: Learning analytics, enhanced by AI, track student engagement and flag struggling individuals. This allows educators to provide timely, targeted human intervention.
  • Easing the Assessment Burden: Technology can streamline the heavy workload associated with more varied assignments. Simple digital tools like structured rubrics and templated feedback systems free up educator time for nuanced, human guidance.
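To make the learning-analytics idea concrete, here is a minimal sketch of the kind of engagement flagging described above. All names, fields and thresholds are hypothetical illustrations, not drawn from any real learning-analytics product; a real system would work from richer data and institutionally agreed criteria, with a human educator always making the follow-up decision.

```python
# Hypothetical sketch: flag students whose recent activity falls below
# simple thresholds so an educator can offer timely, targeted support.
# Field names and threshold values are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Engagement:
    student: str
    logins_last_week: int
    tasks_submitted: int
    tasks_expected: int


def flag_struggling(records, min_logins=2, min_completion=0.5):
    """Return the students whose login count or task-completion rate
    drops below the given thresholds."""
    flagged = []
    for r in records:
        completion = (
            r.tasks_submitted / r.tasks_expected if r.tasks_expected else 1.0
        )
        if r.logins_last_week < min_logins or completion < min_completion:
            flagged.append(r.student)
    return flagged


records = [
    Engagement("A", logins_last_week=5, tasks_submitted=4, tasks_expected=4),
    Engagement("B", logins_last_week=1, tasks_submitted=3, tasks_expected=4),
    Engagement("C", logins_last_week=4, tasks_submitted=1, tasks_expected=4),
]
print(flag_struggling(records))  # students B and C warrant a check-in
```

The point of such a tool is not to automate judgement but to direct scarce educator attention: the flag triggers a human conversation, not an automated consequence.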

In summary, the most significant impact of AI isn’t the familiar promise of doing things better or faster. By undermining traditional methods of learning verification through the ease of academic dishonesty, AI has created an unavoidable pressure for systemic change. It forces colleges to reconsider what they are assessing and what value their degrees truly represent.

Rather, it is that AI, by challenging the old system so thoroughly, makes the redesign of higher education a critical necessity.

Brian Mulligan

E-learning Consultant
Universal Learning Systems (ulsystems.com)

Brian Mulligan is an e-learning consultant with Universal Learning Systems (ulsystems.com), having retired as Head of Online Learning Innovation at Atlantic Technological University in Sligo in 2022. His current interests include innovative models of higher education and the strategic use of learning technologies in higher education.


Keywords