What We Must Do About AI In Education

By Dr Eamon Costello, Associate Professor of Digital Learning at Dublin City University
Estimated reading time: 9 minutes
Donald Trump shaking hands with Satya Nadella while Geoff Bezos and Tim Cook look on.

“Can you believe that Somalia – they turned out to be higher IQ than we thought.
I always say these are low-IQ people.”
– Donald J Trump, January 3rd, 2026

Should we learn with AI?

The Manifesto for Generative AI in Higher Education by Hazel Farrell and Ken McCarthy (2025) is a text composed of 30 propositional statements. It is provocative in the sense that the reader is challenged, on some level, to either agree or disagree with each statement and will likely experience a mix of emotional responses, according to how each statement either affirms or affronts their current beliefs about AI. Here, I respond to one of the statements with which I disagree.

Most of the statements take the form: x is y, or x does y. Only two are explicitly directive, involving normative or prescriptive statements, i.e. should/must. One of these statements is:

“Students must learn with GenAI before they can question it.”

This statement is as far as the text as a whole goes towards saying what should be done about AI in a prescriptive sense, i.e. in this case, that it should be used. The implication is that students cannot have a valid opinion on AI without first using it (or, as it is framed here, “learning with it”). This could be seen, however, to preclude certain forms of learning. Reading about something, or hearing an argument about it, may arguably be as valid a form of educational experience as picking up a thing and using it. Moreover, if we use something, it does not always follow that we then understand it, or what we were doing with it (nor indeed what it might have been doing to us). In discussions about AI, an experiential element is sometimes offered as both an uncomplicated requisite and a simultaneous cause of learning.

Another critique of this framing is that people could be forced to use harmful tools. For example, I have heard that Grok is a harmful tool and that it has been used to create deepfakes: explicit, non-consensual pornographic images of women and children. I have never tried it myself. Do I need to create a Grok account and make paedophilic images before I can have an opinion on whether this tool is useful, before I can question it?

This may seem an extreme example of AI harms, but it is worth considering that when we talk about GenAI, we are not usually talking about educational technologies carefully designed for students. Rather, we mostly mean general-purpose consumer products, whose long-term effects upon learning, knowledge production and education are as yet unknown. This, at least, is the opinion of a group of students from California State University – an institution which has conducted one of the highest-profile rollouts of GenAI (ChatGPT) in higher education. The students petitioned the university to “cancel its contract with OpenAI and to use the savings to protect jobs at CSU campuses facing layoffs”. Their stance aligns with warnings from some researchers that, as with smoking, asbestos and social media, exposure was actively encouraged before we realised the harms. See Guest et al. (2025), whose paper Against the uncritical adoption of ‘AI’ technologies in academia gives examples of this type of framing of AI.

From Consentless Technologies to AI-Nothing

At the moment, we are staring in sadness, horror and denial at the USA’s descent into autocracy and the deeply racist and harmful ideas and actions of its government. For example, in a recent address at Davos, US President Donald Trump mocked the country of Somalia and talked about the “low-IQ” of Somali people. This was not widely reported, which raises the question of whether such statements are now deemed so normal and un-newsworthy that we have accepted that one of the most powerful people in the world is also one of the most racist. This person is the leader of the country from which we currently import all our GenAI technology for education. The USA is AI’s primary regulator (Rice, Quintana, & Alexandrou, 2025) and ideological driver. Its dominant cultural values will be increasingly embedded in the technology.

If AI is an artefact that can “have politics” (Winner, 1980), it is reasonable to take care in how we approach such technologies and in the language we use about them. AI could be leading us towards forms of Authoritarian EdTech (Costello & Gow, 2025) composed of ensembles of “consentless technologies” characterised by surveillance, displays of power and a lack of any real concern for learners beyond how their actions enrich corporations.

Consentless technologies are those we become habituated to, in our educational spaces and workplaces, that sprout new features overnight, which not-so-subtly demand that we use them: “Would you like me to write this for you ✨?”

Last year, for example, a “Homework help” feature was introduced to Google’s Chrome browser. It only activated itself when it detected that users were accessing a VLE/LMS. If they were, it prompted them to use AI to interact with the content of the course. Typical activities it could perform were summarising course content or looking up related information, but also completing course quizzes.

It is safe to say that no one has asked for the number of pop-ups and prompts that persistently urge us to use AI in social media, web browsers, email, and word processors. It is reasonable to pause and ask ourselves what this relentless promotion is telling us about the nature of the tools, and what they are really designed to do.

Should we learn with AI?

What then should we teach our students, and what should they learn these lessons with? Given that we are being compelled to try AI every five minutes, learning with it does not seem like much of a rare commodity, nor much of a “marketable skill”. To differentiate oneself as a graduate in a “skills marketplace”, would it not be more advantageous to have the types of aptitudes, skills and competencies derived from interactions with things that are not being so aggressively pushed upon us?

What would this look like? I cannot say exactly, or at least will not give you the type of answer that can be easily fed into a machine as just another Pavlovian prompt-response set. All I can advise is that, if everyone is doing something, and you blithely copy them, well then, you are giving it your very best shot at mediocrity.

AI Nothing

Lucy Suchman (2023) has decried the uncontroversial “Thingness” of AI. And in the course of my work, I sometimes feel under pressure to think about some thing or do some thing (“what must I do or think about AI?”). But my more abiding and enduring concern is in trying to meet others, through my teaching and my writing and my research, in places of no-thing, in great spaces out beyond the end of everything. (Hopefully, I will see you there someday.)

What do I mean by this? I mean can we really learn “with AI”? Can it be there for us? Is it there? And if it is, is it all there? And if it is all there is it all there is?

It is hard to escape the feeling that AI-everywhere and AI-anything is AI-nothing.

To be clear, I am not saying that we must not learn with AI.

Nor that we must learn with AI;

Neither with nor without AI;

Nor with and without AI.

These four propositions exhaust the possible options that could be used to clarify what I am saying we must do about AI in education (Nagarjuna, 1995).

You can decide, dear reader, whether it is helpful or unhelpful, that I am deeply committed to none of them.

References

Costello, E., & Gow, S. (2025). Authoritarian EdTech. Dialogues on Digital Society, 1(3), 302–306. https://doi.org/10.1177/29768640251377165

Cottom, T. M. (2025, March 29). The tech fantasy that powers A.I. is running on fumes. The New York Times. https://www.nytimes.com/2025/03/29/opinion/ai-tech-innovation.html

Farrell, H., & McCarthy, K. (2025). Manifesto for Generative AI in Higher Education: A living reflection on teaching, learning, and technology in an age of abundance. GenAI:N3, South East Technological University. https://manifesto.genain3.ie/

Guest, O., Suarez, M., Müller, B. C. N., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the uncritical adoption of ‘AI’ technologies in academia (Advance online publication). Zenodo. https://doi.org/10.5281/zenodo.17065099

Nagarjuna. (1995). The Fundamental Wisdom of the Middle Way: Nāgārjuna’s Mūlamadhyamakakārikā (J. L. Garfield, Trans.). Oxford University Press.

Rice, M., Quintana, R., & Alexandrou, A. (2025). Overlapping complexities regarding artificial intelligence and other advanced technologies in professional learning. Professional Development in Education, 51(3), 369–382. https://doi.org/10.1080/19415257.2025.2490350

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2), 20539517231206794.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652

Dr Eamon Costello

Associate Professor of Digital Learning
DCU

Dr Costello is an Associate Professor of Digital Learning at Dublin City University, president of the Irish Learning Technology Association and an accomplished teacher, researcher and public speaker. He is deeply curious about how we learn in different environments and is known as a creative and innovative communicator. He is concerned with how we actively shape our world so that we can have better and more humane places in which to think, work, live and learn. He is an advocate of using the right tool for the job or sometimes none at all, for not everything can be fixed or should be built.

The Transformative Power of Communities of Practice in AI Upskilling for Educators

By Bernie Goldbach, RUN EU SAP Lead
Estimated reading time: 5 minutes
The power of collaboration: Communities of Practice are essential for educators to collectively navigate and integrate new AI technologies, transforming teaching and learning through shared knowledge and support. Image (and typos) generated by Nano Banana.

When the N-TUTORR programme ended in Ireland, I remained seated in the main Edtech25 auditorium to hear some of the final conversations by key players. They stood at a remarkable intersection of professional development and technological innovation. And some of them issued a call to action for continued conversation, perhaps engaging with generative AI tools within a Community of Practice (CoP).

Throughout my 40-year teaching career, I have walked pathways to genuine job satisfaction that extended far beyond simple skill acquisition. In my case, this satisfaction emerged from the synergy between collaborative learning, pedagogical innovation, and the excitement of uncharted territory unfolding alongside peers who share a commitment to educational excellence.

Finding Professional Fulfillment Through Shared Learning

The journey of upskilling in generative AI feels overwhelming when undertaken in isolation. I am still looking for a structured CoP for Generativism in Education. This would be a rich vein of collective discovery. At the moment, I have three colleagues who help me develop my skills with ethical and sustainable use of AI.

Ethan Mollick, whose research at the Wharton School has illuminated the practical applications of AI in educational contexts, consistently emphasises that the most effective learning about AI tools happens through shared experimentation and peer discussion. His work demonstrates that educators who engage collaboratively with AI technologies develop more sophisticated mental models of how these tools can enhance rather than replace pedagogical expertise. This collaborative approach alleviates the anxiety many educators feel about technological change, replacing it with curiosity and professional confidence.

Mairéad Pratschke, whose work emphasises the importance of collaborative professional learning, has highlighted how communities create safe spaces where educators can experiment, fail, and succeed together without judgment. This psychological safety becomes the foundation upon which genuine professional growth occurs.

Frances O’Donnell, whose insights at major conferences have become invaluable resources for educators navigating the AI landscape, directs the most effective AI workshops I have attended. O’Donnell’s hands-on training at conferences such as CESI (https://www.cesi.ie), EDULEARN (https://iceri.org), ILTA (https://ilta.ie), and Online Educa Berlin (https://oeb.global) has illuminated the engaging features of instructional design that emerge when educators thoughtfully integrate AI tools. Her instructional design frameworks demonstrate how AI can support the creation of personalised learning pathways, adaptive assessments, and multimodal content that engages diverse learners. O’Donnell’s emphasis on the human element in AI-assisted design resonates deeply with Communities of Practice.

And thanks to Frances O’Donnell, I discovered the AI assistants inside H5P.

Elevating Instructional Design Through AI-Assisted Tools

The quality of instructional design, personified by clever educators, represents the most significant leap I have made when combining AI tools with collaborative professional learning. The commercial version of H5P (https://h5p.com) has revolutionised my workflow when creating interactive educational content. The smart import feature of H5P.com complements my teaching practice. I can quickly design rich, engaging learning experiences that would previously have required specialised technical skills or significant time investments. I have discovered ways to create everything from interactive videos with embedded questions to gamified quizzes and sophisticated branching scenarios.

I hope I find a CoP in Ireland that is interested in several of the H5P workflows I have adopted. For the moment, I’m revealing these remarkable capabilities while meeting people at education events in Belgium, Spain, Portugal, and the Netherlands. It feels like I’m a town crier who has a notebook full of shared templates. I want to offer links to the interactive content that I have created with H5P AI and gain feedback from interested colleagues. But more than the conversations at the conferences, I’m interested in making real connections with educators who want to actively participate in vibrant online communities where sustained professional learning continues.

Sustaining Innovation with Community

Job satisfaction among educators has always been closely tied to their sense of efficacy and their ability to make meaningful impacts on student learning. Communities of Practice focused on AI upskilling amplify this satisfaction by creating networks of mutual support where members celebrate innovations, troubleshoot challenges, and collectively develop best practices. When an educator discovers an effective way to use AI for differentiation or assessment design, sharing that discovery with colleagues who understand the pedagogical context creates a profound sense of professional contribution.

These communities also combat the professional tension that currently faces proficient AI users. Mollick’s observations about blowback against widespread AI adoption in education reveal a critical imperative to stand together with a network that validates the quality of teaching and provides constructive feedback. When sharing with a community, individual risk-taking morphs into collective innovation, making the professional development experience inherently more satisfying and sustainable.

We need the spark of N-TUTORR inside an AI-focused Community of Practice. We need to amplify voices. Together we need to become confident navigators of innovation. We need to co-create contextually appropriate pedagogical approaches that effectively leverage AI in education.



Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

by Jonathan Sansom – Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Estimated reading time: 5 minutes
Bridging the gap: This image illustrates how Microsoft Copilot can be leveraged in secondary education, moving from a “force analysis” of opportunities and challenges to the implementation of “pedagogical copilot agents” that assist both students and educators. Image (and typos) generated by Nano Banana.

At Hills Road, we’ve been living in the strange middle ground of generative AI adoption. If you charted its trajectory, it wouldn’t look like a neat curve or even the familiar ‘hype cycle’. It’s more like a tangled ball of wool: multiple forces pulling in competing directions.

The Forces at Play

Our recent work with Copilot Agents has made this more obvious. If we attempt a force analysis, the drivers for GenAI adoption are strong:

  • The need to equip students and staff with future-ready skills.
  • Policy and regulatory expectations, from DfE and Ofsted, to show assurance around AI integration.
  • National AI strategies that frame this as an essential area for investment.
  • The promise of personalised learning and workload reduction.
  • A pervasive cultural hype, blending existential narratives with a relentless ‘AI sales’ culture.

But there are also significant restraints:

  • Ongoing academic integrity concerns.
  • GDPR and data privacy ambiguity.
  • Patchy CPD and teacher digital confidence.
  • Digital equity and access challenges.
  • The energy cost of AI at scale.
  • Polarisation of educator opinion, and staff change fatigue.

The result is persistent dissonance. AI is neither fully embraced nor rejected; instead, we are all negotiating what it might mean in our own settings.

Educator-Led AI Design

One way we’ve tried to respond is through educator-led design. Our philosophy is simple: we shouldn’t just adopt GenAI; we must adapt it to fit our educational context.

That thinking first surfaced in experiments on Poe.com, where we created an Extended Project Qualification (EPQ) Virtual Mentor. It was popular, but it lived outside institutional control – neither enterprise-grade nor GDPR-secure.

So in 2025 we have moved everything in-house. Using Microsoft Copilot Studio, we created 36 curriculum-specific agents, one for each A Level subject, deployed directly inside Teams. These agents are connected to our SharePoint course resources, ensuring students and staff interact with AI in a trusted, institutionally managed environment.

Built-in Pedagogical Skills

Rather than thinking of these agents as simply ‘question answering machines’, we’ve tried to embed pedagogical skills that mirror what good teaching looks like. Each agent is structured around:

  • Explaining through metaphor and analogy – helping students access complex ideas in simple, relatable ways.
  • Prompting reflection – asking students to think aloud, reconsider, or connect their ideas.
  • Stretching higher-order thinking – moving beyond recall into analysis, synthesis, and evaluation.
  • Encouraging subject language use – reinforcing terminology in context.
  • Providing scaffolded progression – introducing concepts step by step, only deepening complexity as students respond.
  • Supporting responsible AI use – modelling ethical engagement and critical AI literacy.

These skills give the agents an educational texture. For example, if a sociology student asks: “What does patriarchy mean, but in normal terms?”, the agent won’t produce a dense definition. It will begin with a metaphor from everyday life, check understanding through a follow-up question, and then carefully layer in disciplinary concepts. The process is dialogic and recursive, echoing the scaffolding teachers already use in classrooms.
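Copilot Studio agents are configured through natural-language instructions rather than code, but the shape of such an instruction block can be sketched. The following Python snippet is purely illustrative, not our production configuration: the skill wording and the `build_agent_instructions` helper are hypothetical, showing only how the six pedagogical skills above might be composed into a reusable, subject-specific instruction block.

```python
# Illustrative sketch: composing the six pedagogical 'skills' into one
# instruction block that could be pasted into an agent's configuration.

PEDAGOGICAL_SKILLS = [
    "Explain new ideas through metaphor and analogy before formal definitions.",
    "Prompt reflection: ask the student to think aloud or reconsider an answer.",
    "Stretch higher-order thinking: move beyond recall into analysis, synthesis and evaluation.",
    "Encourage subject-specific terminology, reinforced in context.",
    "Scaffold progression: introduce concepts step by step, deepening only as the student responds.",
    "Model responsible, ethical AI use and critical AI literacy.",
]

def build_agent_instructions(subject: str) -> str:
    """Compose a subject-specific instruction block from the shared skill set."""
    header = (
        f"You are a pedagogical mentor for A Level {subject}. "
        "Answer only from the linked course resources. "
        "Never hand over a finished answer; guide the student towards it."
    )
    skills = "\n".join(f"{i}. {s}" for i, s in enumerate(PEDAGOGICAL_SKILLS, 1))
    return f"{header}\n\nAlways apply these teaching behaviours:\n{skills}"

print(build_agent_instructions("Sociology"))
```

The design point is that the skills are shared across all 36 agents, while only the header varies per subject, which keeps the pedagogical behaviour consistent and easy to refine in one place.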

The Case for Copilot

We’re well aware that Microsoft Copilot Studio wasn’t designed as a pedagogical platform. It comes from the world of Power Automate, not the classroom. In many ways we’re “hijacking” it for our purposes. But it works.

The technical model is efficient: one Copilot Studio authoring licence, no full Copilot licences required, and all interactions handled through Teams chat. Data stays in tenancy, governed by our 365 permissions. It’s simple, secure, and scalable.

And crucially, it has allowed us to position AI as a learning partner, not a replacement for teaching. Our mantra remains: pedagogy first, technology second.

Lessons Learned So Far

From our pilots, a few lessons stand out:

  • Moving to an in-tenancy model was essential for trust.
  • Pedagogy must remain the driver – we want meaningful learning conversations, not shortcuts to answers.
  • Expectations must be realistic. Copilot Studio has clear limitations, especially in STEM contexts where dialogue is weaker.
  • AI integration is as much about culture, training, and mindset as it is about the underlying technology.

Looking Ahead

As we head into 2025–26, we’re expanding staff training, refining agent ‘skills’, and building metrics to assess impact. We know this is a long-haul project – five years at least – but it feels like the right direction.

The GenAI systems that students and teachers are using in college were, in the main, designed by engineers, developers, and commercial actors. What’s missing is the educator’s voice. Our work is about inserting that voice: shaping AI not just as a tool for efficiency, but as an ally for reflection, questioning, and deeper thinking.

The challenge is to keep students out of what I’ve called the ‘Cognitive Valley’, that place where understanding is lost because thinking has been short-circuited. Good pedagogical AI can help us avoid that.

We’re not there yet. Some results are excellent, others uneven. But the work is underway, and the potential is undeniable. The task now is to make GenAI fit our context, not the other way around.

Jonathan Sansom

Director of Digital Strategy,
Hills Road Sixth Form College, Cambridge

Passionate about education, digital strategy in education, social and political perspectives on the purpose of learning, cultural change, wellbeing, group dynamics – and the mysteries of creativity…


Software / Services Used

Microsoft Copilot Studio: https://www.microsoft.com/en-us/microsoft-365-copilot/microsoft-copilot-studio
