AI and the future of education: Disruptions, dilemmas and directions


Source

UNESCO

Summary

This UNESCO report provides policy guidance on integrating artificial intelligence (AI) into education systems worldwide. It stresses both the opportunities—such as personalised learning, enhanced efficiency, and expanded access—and the risks, including bias, privacy concerns, and the erosion of teacher and learner agency. The document frames AI as a powerful tool that can help address inequalities and support sustainable development, but only if implemented responsibly and inclusively.

Central to the report is the principle that AI in education must remain human-centred, promoting equity, transparency, and accountability. It highlights the importance of teacher empowerment, digital literacy, and robust governance frameworks. The guidance calls for capacity building at all levels, from policy to classroom practice, and for international cooperation to ensure that AI use aligns with ethical standards and local contexts. Ultimately, the report argues that AI should augment—not replace—human intelligence in education.

Key Points

  • AI offers opportunities for personalised learning and system efficiency.
  • Risks include bias, inequity, and privacy breaches if left unchecked.
  • AI in education must be guided by human-centred, ethical frameworks.
  • Teachers remain central; AI should support rather than replace them.
  • Digital literacy for learners and educators is essential.
  • Governance frameworks must ensure transparency and accountability.
  • Capacity building and training are critical for sustainable adoption.
  • AI should contribute to equity and inclusion, not exacerbate divides.
  • International collaboration is vital for responsible AI use in education.
  • AI’s role is to augment human intelligence, not supplant it.

Conclusion

UNESCO concludes that AI has the potential to transform education systems for the better, but only if adoption is deliberate, ethical, and values-driven. Policymakers must prioritise equity, inclusivity, and transparency while ensuring that human agency and the role of teachers remain central to education in the age of AI.

Keywords

URL

https://www.unesco.org/en/articles/ai-and-future-education-disruptions-dilemmas-and-directions

Summary generated by ChatGPT 5


Understanding the Impacts of Generative AI Use on Children


Source

Alan Turing Institute

Summary

This report, prepared by the Alan Turing Institute with support from the LEGO Group, explores the impacts of generative AI on children aged 8–12 in the UK, alongside the views of their parents, carers, and teachers. Two large surveys were conducted: one with 780 children and their parents/carers, and another with 1,001 teachers across primary and secondary schools. The study examined how children encounter and use generative AI, how parents and teachers perceive its risks and benefits, and what this means for children’s wellbeing, learning, and creativity.

Findings show that while household use of generative AI is widespread (55%), access and awareness are uneven: higher among wealthier families and pupils at private schools, lower in state schools and among disadvantaged groups. About 22% of children reported using generative AI, most commonly ChatGPT, for activities ranging from creating pictures to homework help. Children with additional learning needs were more likely to use AI for communication and companionship. Both children and parents who used AI themselves tended to view it positively, though parents voiced concerns about inaccuracy, inappropriate content, and reduced critical thinking. Teachers were frequent adopters, with two-thirds using generative AI for lesson planning and research, and were generally optimistic about its benefits for their own work. However, many were uneasy about student use, particularly around academic integrity and diminished originality in schoolwork.

Key Points

  • 55% of UK households surveyed report generative AI use, with access shaped by income, region, and school type.
  • 22% of children (aged 8–12) have used generative AI; usage rises with age and is far higher in private schools.
  • ChatGPT is the most popular tool (58%), followed by Gemini and Snapchat’s “My AI.”
  • Children mainly use AI for creativity, learning, entertainment, and homework; those with additional needs use it more for communication and support.
  • 68% of child users find AI exciting; their enthusiasm strongly correlates with parents’ positive attitudes.
  • Parents are broadly optimistic (76%) but remain concerned about exposure to inappropriate or inaccurate information.
  • Teachers’ adoption is high (66%), especially for lesson planning and resource design, but often relies on personal licences.
  • Most teachers (85%) report increased productivity and confidence, though they remain more cautious about trusting AI outputs.
  • Teachers are worried about students over-relying on AI: 57% report awareness of pupils submitting AI-generated work as their own.
  • Optimism is higher for AI as a tool to support students with special educational needs than as a way to boost general student creativity or engagement.

Conclusion

Generative AI is already part of children’s digital lives, but access, understanding, and experiences vary widely. It sparks excitement and creativity yet raises concerns about equity, critical thinking, and integrity in education. While teachers see strong benefits for their own work, they remain divided on its value for students. The findings underline the need for clear policies, responsible design, and adult guidance to ensure AI enhances rather than undermines children’s learning and wellbeing.

Keywords

URL

https://www.turing.ac.uk/sites/default/files/2025-06/understanding_the_impacts_of_generative_ai_use_on_children_-_wp1_report.pdf

Summary generated by ChatGPT 5