AI Detectors in Education


Source

Associate Professor Mark A. Bassett

Summary

This report critically examines the use of AI text detectors in higher education, questioning their accuracy, fairness, and ethical implications. While institutions often adopt detectors as a visible response to concerns about generative AI in student work, the paper highlights that their statistical metrics (e.g., false positive/negative rates) are largely meaningless in real-world educational contexts. Human- and AI-written text cannot be reliably distinguished, making detector outputs unreliable as evidence. Moreover, reliance on detectors risks reinforcing inequities: students with access to premium AI tools are less likely to be flagged, while others face disproportionate scrutiny.

Bassett argues that AI detectors compromise fairness and transparency in academic integrity processes. Comparisons to metal detectors, smoke alarms, or door locks are dismissed as misleading, since those tools measure objective, physical phenomena with regulated standards, unlike the probabilistic guesswork of AI detectors. The report stresses that detector outputs shift the burden of proof unfairly onto students, often pressuring them into confessions or penalising them based on arbitrary markers like writing style or speed. Instead of doubling down on flawed tools, the focus should be on redesigning assessments, clarifying expectations, and upholding procedural fairness.

Key Points

  • AI detectors appear effective but offer no reliable standard of evidence.
  • Accuracy metrics such as true positive and false positive rates are meaningless in practice outside controlled tests.
  • Detectors unfairly target students without addressing systemic integrity issues.
  • Reliance risks inequity: affluent or tech-savvy students can evade detection more easily.
  • Running multiple detectors or comparing student work against AI outputs compounds bias rather than producing evidence.
  • Analogies to locks, smoke alarms, or metal detectors are misleading and invalid.
  • Procedural fairness demands that institutions—not students—carry the burden of proof.
  • False positives have serious consequences for students, unlike benign fire alarm errors.
  • Deterrence through fear undermines trust and shifts education toward surveillance.
  • Real solutions lie in redesigning assessment practices, not deploying flawed detection tools.
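The point that benchmark accuracy figures mislead in real classrooms can be illustrated with a simple base-rate calculation. The sketch below is purely illustrative: the true positive rate, false positive rate, and prevalence figures are hypothetical assumptions, not numbers taken from Bassett's report.

```python
# Illustrative base-rate sketch: even a detector with seemingly strong
# accuracy figures flags many innocent students when genuine AI use is
# uncommon. All numbers here are hypothetical, not from the report.

def positive_predictive_value(tpr: float, fpr: float, prevalence: float) -> float:
    """Probability that a flagged submission really is AI-written (Bayes' rule)."""
    true_positives = tpr * prevalence          # AI-written work correctly flagged
    false_positives = fpr * (1 - prevalence)   # human work wrongly flagged
    return true_positives / (true_positives + false_positives)

# Hypothetical figures: 90% true positive rate, 5% false positive rate,
# and 5% of submissions actually AI-written.
ppv = positive_predictive_value(tpr=0.90, fpr=0.05, prevalence=0.05)
print(f"Probability a flagged submission is actually AI-written: {ppv:.0%}")
```

Under these assumptions roughly half of all flags would point at human-written work, which is one way to see why a detector's accuracy in controlled tests says little about the reliability of any individual accusation.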

Conclusion

AI detectors are unreliable, unregulated, and ethically problematic as tools for ensuring academic integrity. Rather than treating detector outputs as evidence, institutions should prioritise fairness, transparency, and assessment redesign. Ensuring that students learn and are evaluated equitably requires moving beyond technological quick fixes toward principled, values-based approaches.

Keywords

URL

https://drmarkbassett.com/assets/AI_Detectors_in_education.pdf

Summary generated by ChatGPT 5


QQI Generative Artificial Intelligence Survey Report 2025


Source

Quality and Qualifications Ireland (QQI), August 2025

Summary

This national survey captures the views of 1,229 staff and 1,005 learners across Ireland’s further, higher, and English language education sectors on their knowledge, use, and perceptions of generative AI (GenAI). The report reveals growing engagement with GenAI but also wide disparities in understanding, policy, and preparedness. Most respondents recognise AI’s transformative impact but remain uncertain about its role in assessment, academic integrity, and employability.

While over 80% of staff and learners believe GenAI will significantly change education and work over the next five years, few feel equipped to respond. Only 20% of staff and 14% of learners report access to GenAI training. Policies are inconsistent or absent, with most institutions leaving decisions on use to individual educators. Both staff and learners support transparent, declared use of GenAI but express concerns about bias, overreliance, loss of essential skills, and declining trust in qualifications. Respondents call for coherent national and institutional policies, professional development, and curriculum reform that balances innovation with integrity.

Key Points

  • 82% of respondents expect GenAI to transform learning and work within five years.
  • 63% of staff and 36% of learners believe GenAI literacy should be explicitly taught.
  • Fewer than one in five institutions currently provide structured GenAI training.
  • Policies on GenAI use are inconsistent, unclear, or absent in most institutions.
  • Over half of respondents fear skill erosion and reduced academic trust from AI use.
  • 70% of staff say assessment rules for GenAI lack clarity or consistency.
  • 83% of learners believe GenAI will change how they are assessed.
  • Staff and learners call for transparent declaration of GenAI use in assignments.
  • 61% of staff feel learners are unprepared to use GenAI responsibly in the workplace.
  • Respondents emphasise ethical governance, inclusion, and sustainable AI adoption.

Conclusion

The survey highlights a critical moment for Irish education: generative AI is already influencing learning and work, yet systems for policy, training, and ethics are lagging behind. To maintain public trust and educational relevance, QQI recommends a coordinated national response centred on transparency, AI literacy, and values-led governance that equips both learners and educators for an AI-driven future.

Keywords

URL

https://www.qqi.ie/sites/default/files/2025-08/generative-artificial-intelligence-survey-report-2025.pdf

Summary generated by ChatGPT 5


Understanding the Impacts of Generative AI Use on Children


Source

Alan Turing Institute

Summary

This report, prepared by the Alan Turing Institute with support from the LEGO Group, explores the impacts of generative AI on children aged 8–12 in the UK, alongside the views of their parents, carers, and teachers. Two large surveys were conducted: one with 780 children and their parents/carers, and another with 1,001 teachers across primary and secondary schools. The study examined how children encounter and use generative AI, how parents and teachers perceive its risks and benefits, and what this means for children’s wellbeing, learning, and creativity.

Findings show that while household use of generative AI is widespread (55%), access and awareness are uneven: higher among wealthier families and pupils at private schools, lower in state schools and disadvantaged groups. About 22% of children reported using generative AI, most commonly ChatGPT, for activities ranging from creating pictures to homework help. Children with additional learning needs were more likely to use AI for communication and companionship. Both children and parents who used AI themselves tended to view it positively, though parents voiced concerns about inaccuracy, inappropriate content, and reduced critical thinking. Teachers were frequent adopters: two-thirds used generative AI for lesson planning and research, and they were generally optimistic about its benefits for their own work. However, many were uneasy about student use, particularly around academic integrity and diminished originality in schoolwork.

Key Points

  • 55% of UK households surveyed report generative AI use, with access shaped by income, region, and school type.
  • 22% of children (aged 8–12) have used generative AI; usage rises with age and is far higher in private schools.
  • ChatGPT is the most popular tool (58%), followed by Gemini and Snapchat’s “My AI.”
  • Children mainly use AI for creativity, learning, entertainment, and homework; those with additional needs use it more for communication and support.
  • 68% of child users find AI exciting; their enthusiasm strongly correlates with parents’ positive attitudes.
  • Parents are broadly optimistic (76%) but remain concerned about exposure to inappropriate or inaccurate information.
  • Teachers’ adoption is high (66%), especially for lesson planning and resource design, but often relies on personal licences.
  • Most teachers (85%) report increased productivity and confidence, though trust in AI outputs is more cautious.
  • Teachers are worried about students over-relying on AI: 57% report awareness of pupils submitting AI-generated work as their own.
  • Optimism is higher for AI as a support tool for special educational needs than for general student creativity or engagement.

Conclusion

Generative AI is already part of children’s digital lives, but access, understanding, and experiences vary widely. It sparks excitement and creativity yet raises concerns about equity, critical thinking, and integrity in education. While teachers see strong benefits for their own work, they remain divided on its value for students. The findings underline the need for clear policies, responsible design, and adult guidance to ensure AI enhances rather than undermines children’s learning and wellbeing.

Keywords

URL

https://www.turing.ac.uk/sites/default/files/2025-06/understanding_the_impacts_of_generative_ai_use_on_children_-_wp1_report.pdf

Summary generated by ChatGPT 5


Explainable AI in education: Fostering human oversight and shared responsibility


Source

The European Digital Education Hub

Summary

This European Digital Education Hub report explores how explainable artificial intelligence (XAI) can support trustworthy, ethical, and effective AI use in education. XAI is positioned as central to ensuring transparency, fairness, accountability, and human oversight in educational AI systems. The document frames XAI within EU regulations (AI Act, GDPR, Digital Services Act, etc.), highlighting its role in protecting rights while fostering innovation. It stresses that explanations of AI decisions must be understandable, context-sensitive, and actionable for learners, educators, policy-makers, and developers alike.

The report emphasises both the technical and human dimensions of XAI, defining four key concepts: transparency, interpretability, explainability, and understandability. Practical applications include intelligent tutoring systems and AI-driven lesson planning, with case studies showing how different stakeholders perceive risks and benefits. A major theme is capacity-building: educators need new competences to critically assess AI, integrate it responsibly, and communicate its role to students. Ultimately, XAI is not only a technical safeguard but a pedagogical tool that fosters agency, metacognition, and trust.

Key Points

  • XAI enables trust in AI by making systems transparent, interpretable, explainable, and understandable.
  • EU frameworks (AI Act, GDPR) require AI systems in education to meet legal standards of fairness, accountability, and transparency.
  • Education use cases include intelligent tutoring systems and lesson-plan generators, where human oversight remains critical.
  • Stakeholders (educators, learners, developers, policymakers) require tailored explanations at different levels of depth.
  • Teachers need competences in AI literacy, critical thinking, and the ethical use of XAI tools.
  • Explanations should align with pedagogical goals, fostering self-regulated learning and student agency.
  • Risks include bias, opacity of data-driven models, and threats to academic integrity if explanations are weak.
  • Opportunities lie in supporting inclusivity, accessibility, and personalised learning.
  • Collaboration between developers, educators, and authorities is essential to balance innovation with safeguards.
  • XAI in education is about shared responsibility—designing systems where humans remain accountable and learners remain empowered.

Conclusion

The report concludes that explainable AI is a cornerstone for trustworthy AI in education. It bridges technical transparency with human understanding, ensuring compliance with EU laws while empowering educators and learners. By embedding explainability into both AI design and classroom practice, education systems can harness AI’s benefits responsibly, maintaining fairness, accountability, and human agency.

Keywords

URL

https://knowledgeinnovation.eu/kic-publication/explainable-ai-in-education-fostering-human-oversight-and-shared-responsibility/

Summary generated by ChatGPT 5


2025 Horizon Report: Teaching and Learning Edition


Source

EDUCAUSE

Summary

The 2025 Horizon Report highlights generative AI (GenAI) as one of the most disruptive forces shaping higher education teaching and learning. It frames GenAI not merely as a technological trend but as a catalyst for rethinking pedagogy, assessment, ethics, and institutional strategy. GenAI tools are now widely available, reshaping how students learn, produce work, and engage with knowledge. The report emphasises both opportunities—personalisation, creativity, and efficiency—and risks, including misinformation, bias, overreliance, and threats to academic integrity.

Institutions are urged to move beyond reactive bans or detection measures and instead adopt values-led, strategic approaches to GenAI integration. This involves embedding AI literacy across curricula, supporting staff development, and redesigning assessments to focus on authentic, process-based demonstrations of learning. Ethical considerations are central: ensuring equity of access, safeguarding privacy, addressing sustainability, and clarifying boundaries of responsible use. GenAI is framed as a general-purpose technology—akin to the internet or electricity—that will transform higher education in profound and ongoing ways.

Key Points

  • GenAI is a general-purpose technology reshaping teaching and learning.
  • Opportunities include personalised learning, enhanced creativity, and staff efficiency.
  • Risks involve misinformation, bias, overreliance, and compromised academic integrity.
  • Detection tools are unreliable; focus should shift to assessment redesign.
  • AI literacy is essential for both staff and students across disciplines.
  • Equity and access must be prioritised to avoid deepening divides.
  • Ethical frameworks should guide responsible, transparent use of GenAI.
  • Sustainability concerns highlight the energy and resource costs of AI.
  • Institutional strategy must integrate GenAI into digital transformation plans.
  • Faculty development and sector-wide collaboration are critical for adaptation.

Conclusion

The report concludes that generative AI is no passing trend but a structural shift in higher education. Its potential to augment teaching and learning is significant, but only if institutions adopt proactive, ethical, and pedagogically grounded approaches. Success lies not in resisting GenAI, but in reimagining educational practices so that students and staff can use it critically, creatively, and responsibly.

Keywords

URL

https://library.educause.edu/resources/2025/5/2025-educause-horizon-report-teaching-and-learning-edition

Summary generated by ChatGPT 5