Generative AI in Higher Education Teaching and Learning: Sectoral Perspectives


Source

Higher Education Authority

Summary

This report, commissioned by the Higher Education Authority (HEA), captures sector-wide perspectives on the impact of generative AI across Irish higher education. Through ten thematic focus groups and a leadership summit, it gathered insights from academic staff, students, support personnel, and leaders. The findings show that AI is already reshaping teaching, learning, assessment, and governance, but institutional responses remain fragmented and uneven. Participants emphasised the urgent need for national coordination, values-led policies, and structured capacity-building for both staff and students.

Key cross-cutting concerns included threats to academic integrity, the fragility of current assessment practices, risks of skill erosion, and unequal access. At the same time, stakeholders recognised opportunities for AI to enhance teaching, personalise learning, support inclusion, and free staff time for higher-value educational work. A consistent theme was that AI should not be treated merely as a technical disruption but as a pedagogical and ethical challenge that requires re-examining educational purpose.

Key Points

  • Sectoral responses to AI are fragmented; coordinated national guidance is urgently needed.
  • Generative AI challenges core values of authorship, originality, and academic integrity.
  • Assessment redesign is necessary—moving towards authentic, process-focused approaches.
  • Risks include skill erosion in writing, reasoning, and information literacy if AI is overused.
  • AI literacy for staff and students must go beyond tool use to include ethics and critical thinking.
  • Ethical use of AI requires shared principles, not just compliance or detection measures.
  • Inclusion is not automatic: without deliberate design, AI risks deepening inequality.
  • Staff feel underprepared and need professional development and institutional support.
  • Infrastructure challenges extend beyond tools to governance, procurement, and policy.
  • Leadership must shape educational vision, not just manage risk or compliance.

Conclusion

Generative AI is already embedded in higher education, raising urgent questions of purpose, integrity, and equity. The consultation shows both enthusiasm and unease, but above all a readiness to engage. The report concludes that a coordinated, values-led, and inclusive approach—balancing innovation with responsibility—will be essential to ensure AI strengthens, rather than undermines, Ireland’s higher education mission.

URL

https://hea.ie/2025/09/17/generative-ai-in-higher-education-teaching-and-learning-sectoral-perspectives/

Summary generated by ChatGPT 5


AI and the Future of Education: Disruptions, Dilemmas and Directions


Source

UNESCO

Summary

This UNESCO report provides policy guidance on integrating artificial intelligence (AI) into education systems worldwide. It stresses both the opportunities—such as personalised learning, enhanced efficiency, and expanded access—and the risks, including bias, privacy concerns, and the erosion of teacher and learner agency. The document frames AI as a powerful tool that can help address inequalities and support sustainable development, but only if implemented responsibly and inclusively.

Central to the report is the principle that AI in education must remain human-centred, promoting equity, transparency, and accountability. It highlights the importance of teacher empowerment, digital literacy, and robust governance frameworks. The guidance calls for capacity building at all levels, from policy to classroom practice, and for international cooperation to ensure that AI use aligns with ethical standards and local contexts. Ultimately, the report argues that AI should augment—not replace—human intelligence in education.

Key Points

  • AI offers opportunities for personalised learning and system efficiency.
  • Risks include bias, inequity, and privacy breaches if left unchecked.
  • AI in education must be guided by human-centred, ethical frameworks.
  • Teachers remain central; AI should support rather than replace them.
  • Digital literacy for learners and educators is essential.
  • Governance frameworks must ensure transparency and accountability.
  • Capacity building and training are critical for sustainable adoption.
  • AI should contribute to equity and inclusion, not exacerbate divides.
  • International collaboration is vital for responsible AI use in education.
  • AI’s role is to augment human intelligence, not supplant it.

Conclusion

UNESCO concludes that AI has the potential to transform education systems for the better, but only if adoption is deliberate, ethical, and values-driven. Policymakers must prioritise equity, inclusivity, and transparency while ensuring that human agency and the role of teachers remain central to education in the age of AI.

URL

https://www.unesco.org/en/articles/ai-and-future-education-disruptions-dilemmas-and-directions

Summary generated by ChatGPT 5


AI Detectors in Education


Source

Associate Professor Mark A. Bassett

Summary

This report critically examines the use of AI text detectors in higher education, questioning their accuracy, fairness, and ethical implications. While institutions often adopt detectors as a visible response to concerns about generative AI in student work, the paper highlights that their statistical metrics (e.g., false positive/negative rates) are largely meaningless in real-world educational contexts. Human- and AI-written text cannot be reliably distinguished, making detector outputs unreliable as evidence. Moreover, reliance on detectors risks reinforcing inequities: students with access to premium AI tools are less likely to be flagged, while others face disproportionate scrutiny.

Bassett argues that AI detectors compromise fairness and transparency in academic integrity processes. Comparisons to metal detectors, smoke alarms, or door locks are dismissed as misleading, since those tools measure objective, physical phenomena with regulated standards, unlike the probabilistic guesswork of AI detectors. The report stresses that detector outputs shift the burden of proof unfairly onto students, often pressuring them into confessions or penalising them based on arbitrary markers like writing style or speed. Instead of doubling down on flawed tools, the focus should be on redesigning assessments, clarifying expectations, and upholding procedural fairness.
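
One way to see why detector accuracy figures offer so little in practice is to work through the base rates. The short sketch below is not from the report; the 90% true positive rate, the false positive rates, and the 5% prevalence of AI-written submissions are assumed figures chosen purely to make the arithmetic concrete.

```python
# Illustrative sketch (assumed figures, not from the report): how much a
# detector "flag" is worth once base rates are taken into account.

def positive_predictive_value(tpr: float, fpr: float, prevalence: float) -> float:
    """Probability that a flagged submission really is AI-written (Bayes' rule)."""
    true_flags = tpr * prevalence            # AI-written and correctly flagged
    false_flags = fpr * (1.0 - prevalence)   # human-written but flagged anyway
    return true_flags / (true_flags + false_flags)

# A detector advertising a 90% TPR and a 1% FPR, applied to a cohort where
# 5% of submissions actually contain undeclared AI text:
print(f"{positive_predictive_value(0.90, 0.01, 0.05):.1%}")  # ~82.6% of flags correct

# The same detector if its real-world FPR is 10% rather than 1%:
print(f"{positive_predictive_value(0.90, 0.10, 0.05):.1%}")  # ~32.1% of flags correct
```

Even under the generous assumptions, roughly one flag in six is a false accusation; under the less generous ones, most flags are. This is one concrete reason why accuracy rates measured in controlled tests translate poorly into evidence against an individual student.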

Key Points

  • AI detectors appear effective but offer no reliable standard of evidence.
  • Accuracy metrics (TPR, FPR, etc.) are meaningless in practice outside controlled tests.
  • Detectors unfairly target students without addressing systemic integrity issues.
  • Reliance risks inequity: affluent or tech-savvy students can evade detection more easily.
  • Using multiple detectors or comparing student work to AI outputs reinforces bias, not evidence.
  • Analogies to locks, smoke alarms, or metal detectors are misleading and invalid.
  • Procedural fairness demands that institutions—not students—carry the burden of proof.
  • False positives have serious consequences for students, unlike benign fire alarm errors.
  • Deterrence through fear undermines trust and shifts education toward surveillance.
  • Real solutions lie in redesigning assessment practices, not deploying flawed detection tools.

Conclusion

AI detectors are unreliable, unregulated, and ethically problematic as tools for ensuring academic integrity. Rather than treating detector outputs as evidence, institutions should prioritise fairness, transparency, and assessment redesign. Ensuring that students learn and are evaluated equitably requires moving beyond technological quick fixes toward principled, values-based approaches.

URL

https://drmarkbassett.com/assets/AI_Detectors_in_education.pdf

Summary generated by ChatGPT 5


QQI Generative Artificial Intelligence Survey Report 2025


Source

Quality and Qualifications Ireland (QQI), August 2025

Summary

This national survey captures the views of 1,229 staff and 1,005 learners across Ireland’s further, higher, and English language education sectors on their knowledge, use, and perceptions of generative AI (GenAI). The report reveals growing engagement with GenAI but also wide disparities in understanding, policy, and preparedness. Most respondents recognise AI’s transformative impact but remain uncertain about its role in assessment, academic integrity, and employability.

While over 80% of staff and learners believe GenAI will significantly change education and work over the next five years, few feel equipped to respond. Only 20% of staff and 14% of learners report access to GenAI training. Policies are inconsistent or absent, with most institutions leaving decisions on use to individual educators. Both staff and learners support transparent, declared use of GenAI but express concerns about bias, overreliance, loss of essential skills, and declining trust in qualifications. Respondents call for coherent national and institutional policies, professional development, and curriculum reform that balances innovation with integrity.

Key Points

  • 82% of respondents expect GenAI to transform learning and work within five years.
  • 63% of staff and 36% of learners believe GenAI literacy should be explicitly taught.
  • Fewer than one in five institutions currently provide structured GenAI training.
  • Policies on GenAI use are inconsistent, unclear, or absent in most institutions.
  • Over half of respondents fear skill erosion and reduced academic trust from AI use.
  • 70% of staff say assessment rules for GenAI lack clarity or consistency.
  • 83% of learners believe GenAI will change how they are assessed.
  • Staff and learners call for transparent declaration of GenAI use in assignments.
  • 61% of staff feel learners are unprepared to use GenAI responsibly in the workplace.
  • Respondents emphasise ethical governance, inclusion, and sustainable AI adoption.

Conclusion

The survey highlights a critical moment for Irish education: generative AI is already influencing learning and work, yet systems for policy, training, and ethics are lagging behind. To maintain public trust and educational relevance, QQI recommends a coordinated national response centred on transparency, AI literacy, and values-led governance that equips both learners and educators for an AI-driven future.

URL

https://www.qqi.ie/sites/default/files/2025-08/generative-artificial-intelligence-survey-report-2025.pdf

Summary generated by ChatGPT 5


Understanding the Impacts of Generative AI Use on Children


Source

Alan Turing Institute

Summary

This report, prepared by the Alan Turing Institute with support from the LEGO Group, explores the impacts of generative AI on children aged 8–12 in the UK, alongside the views of their parents, carers, and teachers. Two large surveys were conducted: one with 780 children and their parents/carers, and another with 1,001 teachers across primary and secondary schools. The study examined how children encounter and use generative AI, how parents and teachers perceive its risks and benefits, and what this means for children’s wellbeing, learning, and creativity.

Findings show that while household use of generative AI is widespread (55%), access and awareness are uneven: higher among wealthier families and in private schools, lower in state schools and among disadvantaged groups. About 22% of children reported using generative AI, most commonly ChatGPT, for activities ranging from creating pictures to homework help. Children with additional learning needs were more likely to use AI for communication and companionship. Both children and parents who used AI themselves tended to view it positively, though parents voiced concerns about inaccuracy, inappropriate content, and reduced critical thinking. Teachers were frequent adopters, with two-thirds using generative AI for lesson planning and research, and they were generally optimistic about its benefits for their work. However, many were uneasy about student use, particularly around academic integrity and diminished originality in schoolwork.

Key Points

  • 55% of UK households surveyed report generative AI use, with access shaped by income, region, and school type.
  • 22% of children (aged 8–12) have used generative AI; usage rises with age and is far higher in private schools.
  • ChatGPT is the most popular tool (58%), followed by Gemini and Snapchat’s “My AI.”
  • Children mainly use AI for creativity, learning, entertainment, and homework; those with additional needs use it more for communication and support.
  • 68% of child users find AI exciting; their enthusiasm strongly correlates with parents’ positive attitudes.
  • Parents are broadly optimistic (76%) but remain concerned about exposure to inappropriate or inaccurate information.
  • Teachers’ adoption is high (66%), especially for lesson planning and resource design, but often relies on personal licences.
  • Most teachers (85%) report increased productivity and confidence, though trust in AI outputs is more cautious.
  • Teachers are worried about students over-relying on AI: 57% report awareness of pupils submitting AI-generated work as their own.
  • Optimism is higher for AI as a support tool for special educational needs than for general student creativity or engagement.

Conclusion

Generative AI is already part of children’s digital lives, but access, understanding, and experiences vary widely. It sparks excitement and creativity yet raises concerns about equity, critical thinking, and integrity in education. While teachers see strong benefits for their own work, they remain divided on its value for students. The findings underline the need for clear policies, responsible design, and adult guidance to ensure AI enhances rather than undermines children’s learning and wellbeing.

URL

https://www.turing.ac.uk/sites/default/files/2025-06/understanding_the_impacts_of_generative_ai_use_on_children_-_wp1_report.pdf

Summary generated by ChatGPT 5