By Dr Eamon Costello, Associate Professor of Digital Learning at Dublin City University
Estimated reading time: 9 minutes
“Can you believe that Somalia – they turned out to be higher IQ than we thought.
I always say these are low-IQ people.”
– Donald J Trump, January 3rd, 2026
Should we learn with AI?
The Manifesto for Generative AI in Higher Education by Hazel Farrell and Ken McCarthy (2025) is a text composed of 30 propositional statements. It is provocative in the sense that the reader is challenged, on some level, to either agree or disagree with each statement and will likely experience a mix of emotional responses, according to how each statement either affirms or affronts their current beliefs about AI. Here, I respond to one of the statements with which I disagree.
Most of the statements take the form x is y, or x does y. Only two are explicitly directive, making normative or prescriptive claims, i.e. using should or must. One of these statements is:
“Students must learn with GenAI before they can question it.”
This statement is as far as the text as a whole goes towards prescribing what should be done about AI: in this case, that it should be used. The implication is that students cannot have a valid opinion on AI without first using it (or, as it is framed here, “learning with it”). This, however, could be seen to preclude certain forms of learning. Reading about something, or hearing an argument about it, may be as valid a form of educational experience as picking a thing up and using it. Moreover, if we use something, it does not always follow that we then understand it, or what we were doing with it (nor, indeed, what it might have been doing to us). In discussions about AI, an experiential element is sometimes offered as both an uncomplicated prerequisite for learning and, simultaneously, its cause.
Another critique of this framing is that people could be forced to use harmful tools. For example, I have heard that Grok is a harmful tool and that it has been used to create deepfakes: explicit, pornographic, non-consensual images of women and children. I have never tried it myself. Do I need to create a Grok account and make paedophilic images before I can have an opinion on whether this tool is useful, before I can question it?
This may seem an extreme example of AI harms, but it is worth considering that when we talk about GenAI, we are not usually talking about educational technologies carefully designed for students. Rather, we mostly mean general-purpose consumer products, whose long-term effects on learning, knowledge production and education are as yet unknown. This, at least, is the opinion of a group of students from California State University, an institution which has conducted one of the highest-profile rollouts of GenAI (ChatGPT) in higher education. The students petitioned the university to “cancel its contract with OpenAI and to use the savings to protect jobs at CSU campuses facing layoffs”. Their stance echoes warnings from some researchers that exposure to smoking, asbestos and social media was likewise actively encouraged before we realised the harms. See Guest et al. (2025), whose paper Against the uncritical adoption of ‘AI’ technologies in academia gives examples of this type of framing of AI.
From Consentless Technologies to AI-Nothing
At the moment, we are staring in sadness, horror and denial at the USA’s descent into autocracy and the deeply racist and harmful ideas and actions of its government. For example, in a recent address at Davos, US President Donald Trump mocked the country of Somalia and talked about the “low IQ” of Somali people. This was not widely reported, which raises the question of whether such statements are now deemed so normal and un-newsworthy that we have accepted that one of the most powerful people in the world is also one of the most racist. This person is the leader of the country from which we currently import all our GenAI technology for education. The USA is AI’s primary regulator (Rice, Quintana, & Alexandrou, 2025) and its ideological driver, and its dominant cultural values will be increasingly embedded in the technology.
If AI is an artefact that can “have politics” (Winner, 1980), it is reasonable to take care in how we approach such technologies and in the language we use about them. AI could be leading us towards forms of Authoritarian EdTech (Costello & Gow, 2025) composed of ensembles of “consentless technologies” characterised by surveillance, displays of power and a lack of any real concern for learners beyond how their actions enrich corporations.
Consentless technologies are those we become habituated to, in our educational spaces and workplaces, that sprout new features overnight, which not-so-subtly demand that we use them: “Would you like me to write this for you ✨?”
Last year, for example, a “Homework help” feature was introduced to Google’s Chrome browser. It activated itself only when it detected that users were accessing a VLE/LMS, and it then prompted them to use AI to interact with the content of the course. Typical activities included summarising course content and looking up related information, but also completing course quizzes.
It is safe to say that no one has asked for the number of pop-ups and prompts that persistently urge us to use AI in social media, web browsers, email, and word processors. It is reasonable to pause and ask ourselves what this relentless promotion tells us about the nature of the tools, and what they are really designed to do.
Should we learn with AI?
What, then, should we teach our students, and what should they learn these lessons with? Given that we are being compelled to try AI every five minutes, learning with it does not seem like much of a rare commodity, nor much of a “marketable skill”. To differentiate oneself as a graduate in a “skills marketplace”, would it not be more advantageous to have aptitudes, skills and competencies derived from interactions with things that are not being so aggressively pushed upon us?
What would this look like? I cannot say exactly, or at least will not give you the type of answer that can be easily fed into a machine as just another Pavlovian prompt-response set. All I can advise is that, if everyone is doing something, and you blithely copy them, well then, you are giving it your very best shot at mediocrity.
AI-Nothing
Lucy Suchman (2023) has decried the uncontroversial “thingness” of AI. In the course of my work, I sometimes feel under pressure to think about some thing or to do some thing (“what must I do or think about AI?”). But my more abiding concern is in trying to meet others, through my teaching, my writing and my research, in places of no-thing, in great spaces out beyond the end of everything. (Hopefully, I will see you there someday.)
What do I mean by this? I mean: can we really learn “with AI”? Can it be there for us? Is it there? And if it is, is it all there? And if it is all there, is it all there is?
It is hard to escape the feeling that AI-everywhere and AI-anything is AI-nothing.
To be clear, I am not saying that we must not learn with AI;
nor that we must learn with AI;
neither with nor without AI;
nor with and without AI.
These four propositions exhaust the possible options that could be used to clarify what I am saying we must do about AI in education (Nagarjuna, 1995).
You can decide, dear reader, whether it is helpful or unhelpful, that I am deeply committed to none of them.
References
Costello, E., & Gow, S. (2025). Authoritarian EdTech. Dialogues on Digital Society, 1(3), 302–306. https://doi.org/10.1177/29768640251377165
Cottom, T. M. (2025, March 29). The tech fantasy that powers A.I. is running on fumes. The New York Times. https://www.nytimes.com/2025/03/29/opinion/ai-tech-innovation.html
Farrell, H., & McCarthy, K. (2025). Manifesto for Generative AI in Higher Education: A living reflection on teaching, learning, and technology in an age of abundance. GenAI:N3, South East Technological University. https://manifesto.genain3.ie/
Guest, O., Suarez, M., Müller, B. C. N., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the uncritical adoption of ‘AI’ technologies in academia (Advance online publication). Zenodo. https://doi.org/10.5281/zenodo.17065099
Nagarjuna. (1995). The Fundamental Wisdom of the Middle Way: Nāgārjuna’s Mūlamadhyamakakārikā (J. L. Garfield, Trans.). Oxford University Press.
Rice, M., Quintana, R., & Alexandrou, A. (2025). Overlapping complexities regarding artificial intelligence and other advanced technologies in professional learning. Professional Development in Education, 51(3), 369–382. https://doi.org/10.1080/19415257.2025.2490350
Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2), 20539517231206794.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652