Rising Use of AI in Schools Comes With Big Downsides for Students


A split image contrasting the perceived benefits and actual drawbacks of AI in education. On the left, "AI'S PROMISE" depicts a bright, modern classroom where students happily engage with holographic AI interfaces and a friendly AI avatar. On the right, "THE UNSEEN DOWNSIDES" shows a darker, more isolated classroom where students are encapsulated in individual AI pods, surrounded by icons representing "STUNTED CRITICAL THINKING," "SOCIAL ISOLATION," and "RELIANCE & PLAGIARISM," with an ominous alien-like AI figure looming in the background. Image (and typos) generated by Nano Banana.
While the integration of AI in schools holds significant promise for personalised learning, its rising use also comes with substantial, often unforeseen, downsides for students. This image starkly contrasts the idealised vision of AI in education with the potential negative realities, highlighting risks such as diminished critical thinking, increased social isolation, and an over-reliance that could foster academic dishonesty.

Source

Education Week

Summary

A new report from the Center for Democracy and Technology warns that the rapid adoption of AI in schools is undermining students’ relationships, critical thinking and data privacy. In 2024–25, 85% of teachers and 86% of students used AI, yet fewer than half received any formal training. The report highlights emotional disconnection, weaker research skills and risks like data breaches and tech-fuelled bullying. While educators acknowledge AI’s benefits for efficiency and personalised learning, experts urge schools to prioritise teacher training, AI literacy, and ethical safeguards to prevent harm. Without adequate guidance, AI could deepen inequities rather than improve learning outcomes.

Key Points

  • AI use has surged across US classrooms, with 85% of teachers and 86% of students using it.
  • Students report weaker connections with teachers and peers due to AI use.
  • Teachers fear declines in students’ critical thinking and authenticity.
  • Less than half of teachers and students have received AI-related training.
  • Experts call for stronger AI literacy, ethics education and policy guardrails.

Keywords

URL

https://www.edweek.org/technology/rising-use-of-ai-in-schools-comes-with-big-downsides-for-students/2025/10

Summary generated by ChatGPT 5


AI Literacy Is Just Digital and Media Literacy in Disguise


In a modern library setting, a diverse group of four students and a female professor are gathered around a glowing, interactive table displaying "AI LITERACY: DIGITAL & MEDIA LITERACY." Overhead, a holographic overlay connects "DIGITAL LITERACY," "MEDIA LITERACY" (with news icons), and "AI LITERACY SKILLS" (with brain and circuit icons), illustrating the interconnectedness of these competencies. Image (and typos) generated by Nano Banana.
This image visually argues that AI literacy is not an entirely new concept but rather an evolution or “disguise” of existing digital and media literacy skills. It highlights the interconnectedness of understanding digital tools, critically evaluating information, and navigating algorithmic influences, suggesting that foundational literacies provide a strong basis for comprehending and engaging with artificial intelligence effectively.

Source

Psychology Today

Summary

Diana E. Graber argues that “AI literacy” is not a new concept but a continuation of long-standing digital and media literacy principles. Triggered by the April 2025 executive order Advancing Artificial Intelligence Education for American Youth, the sudden focus on AI education highlights skills schools should have been teaching all along—critical thinking, ethical awareness, and responsible participation online. Graber outlines seven core areas where digital and media literacy underpin AI understanding, including misinformation, digital citizenship, privacy, and visual literacy. She warns that without these foundations, students face growing risks such as deepfake abuse, data exploitation, and online manipulation.

Key Points

  • AI literacy builds directly on digital and media literacy foundations.
  • An executive order has made AI education a US national priority.
  • Core literacies—critical thinking, ethics, and responsibility—are vital for safe AI use.
  • Key topics include misinformation, cyberbullying, privacy, and online safety.
  • The article urges sustained digital education rather than reactionary AI hype.

Keywords

URL

https://www.psychologytoday.com/us/blog/raising-humans-in-a-digital-world/202510/ai-literacy-is-just-digital-and-media-literacy-in

Summary generated by ChatGPT 5


Something Wicked This Way Comes

by Jim O’Mahony, SFHEA – Munster Technological University

A disheveled male university professor with gray hair and glasses, wearing a tweed jacket, kneels on the floor in his office, looking under a beige armchair with a panicked expression while holding his phone like a remote. A stack of books, a globe, and a whiteboard with equations are visible. Image generated by Nano Banana
The true test of a professor’s intelligence: finding the lost remote control. Image generated by Nano Banana

I remember as a 7-year-old having to hop off the couch at home to change the TV channel. Shortly afterwards, a futuristic-looking device called a remote control was placed in my hand, and since then I have chosen its wizardry over my own physical ability to operate the TV. Why wouldn’t I? It’s reliable, instant, multifunctional, compliant and most importantly… less effort.

The Seduction of Less Effort

Less effort… as humans, we’re biologically wired for it. Our bodies will always choose the energy-saving option over the energy-consuming one, whether the task is physical or cognitive. It’s an evolutionary adaptation to conserve energy.

Now, my life hasn’t been impaired by the introduction of the remote control, but imagine for a minute if that remote control had replaced my thinking as a 7-year-old rather than my ability to operate a TV. It sounds fanciful, but in reality, this is exactly the world in which our students now live.

Within their grasp is a seductive all-knowing technological advancement called Gen AI, with the ability to replace thinking, reflection, metacognition, creativity, evaluative judgement, interpersonal relationships and other richly valued attributes that make us uniquely human.

Now, don’t get me wrong, I’m a staunch flag bearer for this new age of Gen AI and can see the unlimited potential it holds for enhanced learning. Who knows? Someday it may even solve Bloom’s 2 sigma problem through its promise of personalised learning.

Guardrails for a New Age

However, I also realise that, as the adults in the room, we have a very narrow window to put sufficient guardrails in place for our students around its use, and urgent, considered governance is needed from university executives.

Gen AI literacy isn’t the most glamorous term (it may not even be the most appropriate term), but it encapsulates what our priority as educators should be: learn what these tools are, how they work, and what their limitations, problems, challenges and pitfalls are, and then ask how we can use them positively within our professional practice to support rather than replace learning.

Isn’t that what we all strive for? To have the right digital tools matched with the best pedagogical practices so that our students enter the workforce as well-rounded, fully prepared graduates – a workforce, by the way, that is rapidly changing, with more than 71% of employers already routinely using Gen AI a year ago (we can only imagine what the figure is now).

Shouldn’t our teaching practices change, then, to reflect the new Gen AI-rich graduate attributes required by employers? Surely the answer is YES… or is it? There is no easy answer – and perhaps no right answer. Maybe we’ve been presented with a wicked problem – an intractable situation in which some crusade to resist AI while others introduce policies to ‘ban the banning’ of AI. Confused, anyone?

Rethinking Assessment in a GenAI World

I believe a common-sense approach is best and would have us reimagine our educational programmes with valid, secure and authentic assessments that reward learning both with and without the use of Gen AI.

Achieving this is far from easy, but as a starting point, consider a recent paper from Deakin University, which advocates for structural changes to assessment design along with clearly communicated instructions to students around Gen AI use.

To facilitate a more discursive approach regarding reimagined assessment protocols, some universities are adopting ‘traffic light systems’ such as the AI Assessment scale, which, although not perfect (or the whole solution), at least promotes open and transparent dialogue with students about assessment integrity – and that’s never a bad thing.

The challenge will come from those academics who resist the adoption of Gen AI in education. Whether their reasons relate to privacy, environmental issues, ethics, inherent bias, AGI, autonomous AI or cognitive offloading concerns (all well-intentioned and entirely valid by the way), Higher Ed debates and decision making around this topic in the coming months will be robust and energetic.

Accommodating the fearful or ‘traditionalist educators’ who feel unprepared or unwilling to road-test Gen AI should be a key part of any educational strategy or initiative. Their voices should be heard and their opinions considered – but in return, they also need to understand how Gen AI works.

From Resistance to Fluency

Within each department, faculty, staffroom or T&L unit – even among the rows of your students – you will find early adopters and digital champions who are a little further along this dimly lit path to Gen AI enlightenment. Seek them out, have coffee with them, reflect on their wisdom and commit to trialling at least one new Gen AI tool or application each week – here’s a list of 100 to get you started. Slowly build your confidence, take an open course, learn about AI fluency, and benefit from the expertise of others.

I’m not encouraging you to be an AI evangelist, but improving your knowledge and general AI capabilities will leave you better placed to make informed decisions for yourself and your students.

Now, did anyone see where I left the remote control?

Jim O’Mahony

University Professor | Biotechnologist | Teaching & Learning Specialist
Munster Technological University

I am a passionate and enthusiastic university lecturer with over 20 years’ experience of designing, delivering and assessing undergraduate and postgraduate programmes. My primary focus as an academic is to empower students to achieve their full potential through innovative educational strategies and carefully designed curricula. I embrace the strategic and well-intentioned use of digital tools as part of my learning ethos, and I have been an early adopter and enthusiastic advocate of Artificial Intelligence (AI) as an educational tool.


Links

Jim also runs a wonderful newsletter on LinkedIn
https://www.linkedin.com/newsletters/ai-simplified-for-educators-7366495926846210052/

Keywords


AI and the future of education. Disruptions, dilemmas and directions


Source

UNESCO

Summary

This UNESCO report provides policy guidance on integrating artificial intelligence (AI) into education systems worldwide. It stresses both the opportunities—such as personalised learning, enhanced efficiency, and expanded access—and the risks, including bias, privacy concerns, and the erosion of teacher and learner agency. The document frames AI as a powerful tool that can help address inequalities and support sustainable development, but only if implemented responsibly and inclusively.

Central to the report is the principle that AI in education must remain human-centred, promoting equity, transparency, and accountability. It highlights the importance of teacher empowerment, digital literacy, and robust governance frameworks. The guidance calls for capacity building at all levels, from policy to classroom practice, and for international cooperation to ensure that AI use aligns with ethical standards and local contexts. Ultimately, the report argues that AI should augment—not replace—human intelligence in education.

Key Points

  • AI offers opportunities for personalised learning and system efficiency.
  • Risks include bias, inequity, and privacy breaches if left unchecked.
  • AI in education must be guided by human-centred, ethical frameworks.
  • Teachers remain central; AI should support rather than replace them.
  • Digital literacy for learners and educators is essential.
  • Governance frameworks must ensure transparency and accountability.
  • Capacity building and training are critical for sustainable adoption.
  • AI should contribute to equity and inclusion, not exacerbate divides.
  • International collaboration is vital for responsible AI use in education.
  • AI’s role is to augment human intelligence, not supplant it.

Conclusion

UNESCO concludes that AI has the potential to transform education systems for the better, but only if adoption is deliberate, ethical, and values-driven. Policymakers must prioritise equity, inclusivity, and transparency while ensuring that human agency and the role of teachers remain central to education in the age of AI.

Keywords

URL

https://www.unesco.org/en/articles/ai-and-future-education-disruptions-dilemmas-and-directions

Summary generated by ChatGPT 5


Understanding the Impacts of Generative AI Use on Children


Source

Alan Turing Institute

Summary

This report, prepared by the Alan Turing Institute with support from the LEGO Group, explores the impacts of generative AI on children aged 8–12 in the UK, alongside the views of their parents, carers, and teachers. Two large surveys were conducted: one with 780 children and their parents/carers, and another with 1,001 teachers across primary and secondary schools. The study examined how children encounter and use generative AI, how parents and teachers perceive its risks and benefits, and what this means for children’s wellbeing, learning, and creativity.

Findings show that while household use of generative AI is widespread (55%), access and awareness are uneven, being higher among wealthier families and private schools, and lower in state schools and disadvantaged groups. About 22% of children reported using generative AI, most commonly ChatGPT, for activities ranging from creating pictures to homework help. Children with additional learning needs were more likely to use AI for communication and companionship. Both children and parents who used AI themselves tended to view it positively, though parents voiced concerns about inaccuracy, inappropriate content, and reduced critical thinking. Teachers were frequent adopters—two-thirds used generative AI for lesson planning and research—and generally optimistic about its benefits for their work. However, many were uneasy about student use, particularly around academic integrity and diminished originality in schoolwork.

Key Points

  • 55% of UK households surveyed report generative AI use, with access shaped by income, region, and school type.
  • 22% of children (aged 8–12) have used generative AI; usage rises with age and is far higher in private schools.
  • ChatGPT is the most popular tool (58%), followed by Gemini and Snapchat’s “My AI.”
  • Children mainly use AI for creativity, learning, entertainment, and homework; those with additional needs use it more for communication and support.
  • 68% of child users find AI exciting; their enthusiasm strongly correlates with parents’ positive attitudes.
  • Parents are broadly optimistic (76%) but remain concerned about exposure to inappropriate or inaccurate information.
  • Teachers’ adoption is high (66%), especially for lesson planning and resource design, but often relies on personal licences.
  • Most teachers (85%) report increased productivity and confidence, though trust in AI outputs is more cautious.
  • Teachers are worried about students over-relying on AI: 57% report awareness of pupils submitting AI-generated work as their own.
  • Optimism is higher for AI as a support tool for special educational needs than for general student creativity or engagement.

Conclusion

Generative AI is already part of children’s digital lives, but access, understanding, and experiences vary widely. It sparks excitement and creativity yet raises concerns about equity, critical thinking, and integrity in education. While teachers see strong benefits for their own work, they remain divided on its value for students. The findings underline the need for clear policies, responsible design, and adult guidance to ensure AI enhances rather than undermines children’s learning and wellbeing.

Keywords

URL

https://www.turing.ac.uk/sites/default/files/2025-06/understanding_the_impacts_of_generative_ai_use_on_children_-_wp1_report.pdf

Summary generated by ChatGPT 5