What We Must Do About AI In Education

By Dr Eamon Costello, Associate Professor of Digital Learning at Dublin City University
Estimated reading time: 9 minutes
Navigating the future: Students and educators collaborating with integrated AI tools to redefine the modern learning experience. Image (and typos) generated by Nano Banana.

“Can you believe that Somalia – they turned out to be higher IQ than we thought.
I always say these are low-IQ people.”
– Donald J Trump, January 3rd, 2026

Should we learn with AI?

The Manifesto for Generative AI in Higher Education by Hazel Farrell and Ken McCarthy (2025) is a text composed of 30 propositional statements. It is provocative in the sense that the reader is challenged, on some level, to either agree or disagree with each statement and will likely experience a mix of emotional responses, according to how each statement either affirms or affronts their current beliefs about AI. Here, I respond to one of the statements with which I disagree.

Most of the statements take the form: x is y, or x does y. Only two are explicitly directive, making normative or prescriptive claims (should/must). One of these statements is:

“Students must learn with GenAI before they can question it.”

This statement is the closest the text as a whole comes to saying what should be done about AI in a prescriptive sense: in this case, that it should be used. The implication is that students cannot have a valid opinion on AI without first using it (or, as it is framed here, “learning with it”). This could be seen, however, to preclude certain forms of learning. Reading about something, or hearing an argument about it, may arguably be as valid a form of educational experience as picking up a thing and using it. Moreover, if we use something, it does not always follow that we then understand it, or what we were doing with it (nor indeed what it might have been doing to us). In discussions about AI, an experiential element is sometimes offered as both an uncomplicated prerequisite for, and a simultaneous cause of, learning.

Another critique of this framing is that people could potentially be forced to use harmful tools. For example, I have heard that Grok is a harmful tool and that it has been used to create deepfakes: explicit, pornographic, non-consensual images of women and children. I have never tried it myself. Do I need to create a Grok account and make paedophilic images before I can have an opinion on whether this tool is useful, before I can question it?

This may seem an extreme example of AI harms, but it is worth considering that when we talk about GenAI, we are not usually talking about educational technologies carefully designed for students. Rather, we mostly mean general-purpose consumer products, whose long-term effects upon learning, knowledge production and education are as yet unknown. This, at least, is the opinion of a group of students from California State University – an institution which has conducted one of the highest-profile rollouts of GenAI (ChatGPT) in higher education. The students petitioned the university to “cancel its contract with OpenAI and to use the savings to protect jobs at CSU campuses facing layoffs”. Their stance aligns with warnings from some researchers that smoking, asbestos and social media were all actively encouraged before we realised their harms. See Guest et al. (2025), whose paper Against the uncritical adoption of ‘AI’ technologies in academia gives examples of this type of framing of AI.

From Consentless Technologies to AI-Nothing

At the moment, we are staring in sadness, horror and denial at the USA’s descent into autocracy and the deeply racist and harmful ideas and actions of its government. For example, in a recent address at Davos, US President Donald Trump mocked the country of Somalia and talked about the “low-IQ” of Somali people. This was not widely reported, which raises the question of whether such statements are now deemed so normal and un-newsworthy that we have accepted that one of the most powerful people in the world is also one of the most racist. This person is the leader of the country from which we currently import all our GenAI technology for education. The USA is AI’s primary regulator (Rice, Quintana, & Alexandrou, 2025) and ideological driver, and its dominant cultural values will be increasingly embedded in it.

If AI is an artefact that can “have politics” (Winner, 1980), it is reasonable to take care in how we approach such technologies and the language with which we talk about using them. AI could be leading us towards forms of Authoritarian EdTech (Costello & Gow, 2025) composed of ensembles of “consentless technologies” characterised by surveillance, displays of power and a lack of any real concern for learners beyond how their actions enrich corporations.

Consentless technologies are those we become habituated to, in our educational spaces and workplaces, that sprout new features overnight, which not-so-subtly demand that we use them: “Would you like me to write this for you ✨?”

Last year, for example, a “Homework help” feature was introduced to Google’s Chrome browser. It activated only when it detected that users were accessing a VLE/LMS. If they were, it prompted them to use AI to interact with the content of the course. Typical activities included summarising course content and looking up related information, but also completing course quizzes.

It is safe to say that no one has asked for the number of pop-ups and prompts that are persistently urging us to use AI in social media, web browsers, email, and word processors. It is reasonable to pause and ask ourselves what this relentless promotion is telling us about the nature of the tools, and what they are really designed to do.

Should we learn with AI?

What then should we teach our students, and what should they learn these lessons with? Given that we are being compelled to try AI every five minutes, learning with it does not seem like much of a rare commodity, nor much of a “marketable skill”. To differentiate oneself as a graduate in a “skills marketplace”, would it not be more advantageous to have the kinds of aptitudes, skills and competencies derived from interactions with things that are not being so aggressively pushed upon us?

What would this look like? I cannot say exactly, or at least will not give you the type of answer that can be easily fed into a machine as just another Pavlovian prompt-response set. All I can advise is that, if everyone is doing something, and you blithely copy them, well then, you are giving it your very best shot at mediocrity.

AI Nothing

Lucy Suchman (2023) has decried the uncontroversial “Thingness” of AI. And in the course of my work, I sometimes feel under pressure to think about some thing or do some thing (“what must I do or think about AI?”). But my more abiding concern is in trying to meet others, through my teaching and my writing and my research, in places of no-thing, in great spaces out beyond the end of everything. (Hopefully, I will see you there someday.)

What do I mean by this? I mean can we really learn “with AI”? Can it be there for us? Is it there? And if it is, is it all there? And if it is all there is it all there is?

It is hard to escape the feeling that AI-everywhere and AI-anything is AI-nothing.

To be clear, I am not saying that we must not learn with AI.

Nor that we must learn with AI;

Neither with nor without AI;

Nor with and without AI.

These four propositions exhaust the possible options that could be used to clarify what I am saying we must do about AI in education (Nagarjuna, 1995).

You can decide, dear reader, whether it is helpful or unhelpful, that I am deeply committed to none of them.

References

Costello, E., & Gow, S. (2025). Authoritarian EdTech. Dialogues on Digital Society, 1(3), 302–306. https://doi.org/10.1177/29768640251377165

Cottom, T. M. (2025, March 29). The tech fantasy that powers A.I. is running on fumes. The New York Times. https://www.nytimes.com/2025/03/29/opinion/ai-tech-innovation.html

Farrell, H., & McCarthy, K. (2025). Manifesto for Generative AI in Higher Education: A living reflection on teaching, learning, and technology in an age of abundance. GenAI:N3, South East Technological University. https://manifesto.genain3.ie/

Guest, O., Suarez, M., Müller, B. C. N., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the uncritical adoption of ‘AI’ technologies in academia (Advance online publication). Zenodo. https://doi.org/10.5281/zenodo.17065099

Nagarjuna. (1995). The Fundamental Wisdom of the Middle Way: Nāgārjuna’s Mūlamadhyamakakārikā (J. L. Garfield, Trans.). Oxford University Press.

Rice, M., Quintana, R., & Alexandrou, A. (2025). Overlapping complexities regarding artificial intelligence and other advanced technologies in professional learning. Professional Development in Education, 51(3), 369–382. https://doi.org/10.1080/19415257.2025.2490350

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2), 20539517231206794.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652

Dr Eamon Costello

Associate Professor of Digital Learning
DCU

Dr Costello is an Associate Professor of Digital Learning at Dublin City University, president of the Irish Learning Technology Association and an accomplished teacher, researcher and public speaker. He is deeply curious about how we learn in different environments and is known as a creative and innovative communicator. He is concerned with how we actively shape our world so that we can have better and more humane places in which to think, work, live and learn. He is an advocate of using the right tool for the job, or sometimes none at all, for not everything can be fixed or should be built.

2025 Review – A Shared Conversation, Built Over Time

Estimated reading time: 9 minutes

From Individual Questions to Collective Practice

Since September, the GenAI:N3 blog has hosted a weekly series of reflections exploring what generative AI means for higher education: for teaching, learning, assessment, academic identity, and institutional responsibility. Early contributions captured a sector grappling with disruption, uncertainty, and unease, asking difficult questions about trust, integrity, creativity, and control at a moment when generative AI arrived faster than policy, pedagogy, or professional development could respond.

As the series unfolded, a clear shift began to emerge. Posts moved from individual reactions and early experimentation towards more structured sense-making, discipline-specific redesign, and crucially shared learning. The introduction of communities of practice as a deliberate strategy for AI upskilling marked a turning point in the conversation: from “How do I deal with this?” to “How do we learn, adapt, and shape this together?” Taken as a whole, the series traces that journey from disruption to agency, and from isolated responses to collective practice.

What makes this series distinctive is not simply its focus on generative AI, but the diversity of voices it brings together. Contributors include academic staff, professional staff, educational developers, students, and sector partners, each writing from their own context while engaging with a set of common challenges. The result is not a single narrative, but a constellation of perspectives that reflect the complexity of teaching and learning in an AI-shaped world.

29th September 2025 – Jim O’Mahony

Something Wicked This Way Comes

The GenAI:N3 blog series opens with a deliberately unsettling provocation, asking higher education to confront the unease, disruption, and uncertainty that generative AI has introduced into teaching, assessment, and academic identity. Rather than framing AI as either saviour or villain, this piece invites a more honest reckoning with fear, denial, and institutional inertia. It sets the tone for the series by arguing that ignoring GenAI is no longer an option; what matters now is how educators respond, individually and collectively, to a technology that has already crossed the threshold into everyday academic practice.

6th October 2025 – Dr Yannis

3 Things AI Can Do for You: The No-Nonsense Guide

Building directly on that initial unease, this post grounds the conversation in pragmatism. Stripping away hype and alarmism, it focuses on concrete, immediately useful ways AI can support academic work, from sense-making to productivity. The emphasis is not on replacement but augmentation, encouraging educators to experiment cautiously, critically, and with intent. In the arc of the series, this contribution marks a shift from fear to agency, demonstrating that engagement with AI can be practical, purposeful, and aligned with professional judgement.

13th October 2025 – Sue Beckingham & Peter Hartley

New Elephants in the Generative AI Room? Acknowledging the Costs of GenAI to Develop ‘Critical AI Literacy’

As confidence in experimentation grows, this post re-introduces necessary friction by surfacing the hidden costs of generative AI. Environmental impact, labour implications, equity, and ethical responsibility are brought into sharp focus, challenging overly simplistic narratives of efficiency and innovation. The authors remind readers that responsible adoption requires confronting uncomfortable trade-offs. Within the wider series, this piece deepens the discussion, insisting that values, sustainability, and social responsibility must sit alongside pedagogical opportunity.

20th October 2025 – Jonathan Sansom

Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

Here the conversation turns toward structured sense-making. Drawing on strategic and pedagogical frameworks, this post explores how educators and institutions can move beyond reactive responses to more deliberate design choices. The idea of AI as a “copilot” rather than an autonomous actor reframes the relationship between teacher, learner, and technology. In the narrative of the series, this contribution offers conceptual tools for navigating complexity, helping readers connect experimentation with strategy.

27th October 2025 – Patrick Shields

AI Adoption & Education for SMEs

Widening the lens beyond universities, this post examines AI adoption through the perspective of small and medium-sized enterprises, highlighting the skills, mindsets, and educational approaches needed to support workforce readiness. The crossover between higher education, lifelong learning, and industry becomes explicit. This piece situates GenAI not just as an academic concern, but as a societal one, reinforcing the importance of education systems that are responsive, connected, and outward-looking.

3rd November 2025 – Tadhg Blommerde

Dr Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

Returning firmly to the classroom, this reflective account explores what happens when students are encouraged to engage critically with AI rather than rely on it unthinkingly. Through curriculum design and assessment choices, learners begin to question outputs, assert their own judgement, and reclaim intellectual agency. This post is a turning point in the series, showing how thoughtful pedagogy can transform AI from a threat to academic integrity into a catalyst for deeper learning.

10th November 2025 – Brian Mulligan

AI Could Revolutionise Higher Education in a Way We Did Not Expect

This contribution steps back to consider second-order effects, arguing that the most significant impact of AI may not be efficiency or automation, but a reconfiguration of how learning, expertise, and value are understood. It challenges institutions to think beyond surface-level policy responses and to anticipate longer-term cultural shifts. Positioned mid-series, the post broadens the horizon, encouraging readers to think systemically rather than tactically.

17th November 2025 – Kerith George-Briant & Jack Hogan

This Is Not the End but a Beginning: Responding to “Something Wicked This Way Comes”

Explicitly dialoguing with the opening post, this response reframes the initial sense of threat as a starting point rather than a conclusion. The authors emphasise community, dialogue, and shared responsibility, arguing that collective reflection is essential if higher education is to navigate GenAI well. This piece reinforces one of the central through-lines of the series: that no single institution or individual has all the answers, but progress is possible through collaboration.

24th November 2025 – Bernie Goldbach

The Transformative Power of Communities of Practice in AI Upskilling for Educators

This post makes the case that the most sustainable way to build AI capability in education is not through one-off training sessions, but through communities of practice that support ongoing learning, experimentation, and shared problem-solving. It highlights how peer-to-peer dialogue helps educators move from cautious curiosity to confident, critical use of tools, while also creating space to discuss ethics, assessment, and evolving norms without judgement. Positioned within the blog series, it serves as a bridge between individual experimentation and institutional change: a reminder that AI upskilling is fundamentally social, and that collective learning structures are one of the best defences against both hype and paralysis.

1st December 2025 – Hazel Farrell et al.

Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy

Offering a discipline-specific lens, this post explores how music education is being rethought in light of AI, creativity, and emerging professional realities. Rather than diluting artistic practice, AI becomes a catalyst for re-examining what it means to teach, learn, and create. Within the series, this contribution demonstrates how GenAI conversations translate into authentic curriculum redesign, grounded in disciplinary values rather than generic solutions.

8th December 2025 – Ken McCarthy

Building the Manifesto: How We Got Here and What Comes Next

This reflective piece pulls together many of the threads running through the series, documenting the collaborative process behind the Manifesto for Generative AI in Higher Education. It positions the Manifesto not as a prescriptive policy document but as a living statement shaped by diverse voices, shared concerns, and collective aspiration. In the narrative arc, it represents a moment of synthesis, turning discussion into a shared point of reference.

15th December 2025 – Leigh Graves Wolf

Rebuilding Thought Networks in the Age of AI

Moving from frameworks to cognition, this post explores how AI is reshaping thinking itself. Rather than outsourcing thought, the author argues for intentionally rebuilding intellectual networks so that AI becomes part of, not a replacement for, human sense-making. This contribution deepens the series philosophically, reminding readers that the stakes of GenAI are as much cognitive and epistemic as they are technical.

22nd December 2025 – Frances O’Donnell

Universities: GenAI – There’s No Stopping, Start Shaping!

The series culminates with a clear call to action. Acknowledging both inevitability and responsibility, this post urges universities to move decisively from reaction to leadership. The emphasis is on shaping futures rather than resisting change, grounded in values, purpose, and public good. As a closing note, it captures the spirit of the entire series: GenAI is already here, but how it reshapes higher education remains a choice.

With Thanks – and an Invitation

This series exists because of the generosity, openness, and intellectual courage of its contributors. Each author took the time to reflect publicly, to question assumptions, to share practice, and to contribute thoughtfully to a conversation that is still very much in motion. Collectively, these posts embody the spirit of GenAI:N3 – collaborative, reflective, and committed to shaping the future of higher education with care rather than fear.

We would like to extend our sincere thanks to all who have contributed to the blog to date, and to those who have engaged with the posts through reading, sharing, and discussion. The conversation does not end here. If you are experimenting with generative AI in your teaching, supporting others to do so, grappling with its implications, or working with students as partners in this space, we warmly invite you to write a blog post of your own. Your perspective matters, and your experience can help others navigate this rapidly evolving landscape.

If you would like to contribute, please get in touch (blog@genain3.ie); we would love to hear from you.


Date | Title | Author | Link
29 September 2025 | Something Wicked This Way Comes | Jim O’Mahony | https://genain3.ie/something-wicked-this-way-comes/
6 October 2025 | 3 Things AI Can Do for You: The No-Nonsense Guide | Dr Yannis | https://genain3.ie/3-things-ai-can-do-for-you-the-no-nonsense-guide/
13 October 2025 | New Elephants in the Generative AI Room? Acknowledging the Costs of GenAI to Develop ‘Critical AI Literacy’ | Sue Beckingham & Peter Hartley | https://genain3.ie/new-elephants-in-the-generative-ai-room-acknowledging-the-costs-of-genai-to-develop-critical-ai-literacy/
20 October 2025 | Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents | Jonathan Sansom | https://genain3.ie/making-sense-of-genai-in-education-from-force-analysis-to-pedagogical-copilot-agents/
27 October 2025 | AI Adoption & Education for SMEs | Patrick Shields | https://genain3.ie/ai-adoption-education-for-smes/
3 November 2025 | Dr Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves | Tadhg Blommerde | https://genain3.ie/dr-strange-syllabus-or-how-my-students-learned-to-mistrust-ai-and-trust-themselves/
10 November 2025 | AI Could Revolutionise Higher Education in a Way We Did Not Expect | Brian Mulligan | https://genain3.ie/ai-could-revolutionise-higher-education-in-a-way-we-did-not-expect/
17 November 2025 | This Is Not the End but a Beginning: Responding to “Something Wicked This Way Comes” | Kerith George-Briant & Jack Hogan | https://genain3.ie/this-is-not-the-end-but-a-beginning-responding-to-something-wicked-this-way-comes/
24 November 2025 | The Transformative Power of Communities of Practice in AI Upskilling for Educators | Bernie Goldbach | https://genain3.ie/the-transformative-power-of-communities-of-practice-in-ai-upskilling-for-educators/
1 December 2025 | Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy | Hazel Farrell et al. | https://genain3.ie/teaching-the-future-how-tomorrows-music-educators-are-reimagining-pedagogy/
8 December 2025 | Building the Manifesto: How We Got Here and What Comes Next | Ken McCarthy | https://genain3.ie/building-the-manifesto-how-we-got-here-and-what-comes-next/
15 December 2025 | Rebuilding Thought Networks in the Age of AI | Leigh Graves Wolf | https://genain3.ie/rebuilding-thought-networks-in-the-age-of-ai/
22 December 2025 | Universities: GenAI – There’s No Stopping, Start Shaping! | Frances O’Donnell | https://genain3.ie/universities-genai-theres-no-stopping-start-shaping/
