What We Must Do About AI In Education

By Dr Eamon Costello, Associate Professor of Digital Learning at Dublin City University
Estimated reading time: 9 minutes
Donald Trump shaking hands with Satya Nadella while Geoff Bezos and Tim Cook look on.

“Can you believe that Somalia – they turned out to be higher IQ than we thought.
I always say these are low-IQ people.”
– Donald J Trump, January 3rd, 2026

Should we learn with AI?

The Manifesto for Generative AI in Higher Education by Hazel Farrell and Ken McCarthy (2025) is a text composed of 30 propositional statements. It is provocative in the sense that the reader is challenged, on some level, to either agree or disagree with each statement and will likely experience a mix of emotional responses, according to how each statement either affirms or affronts their current beliefs about AI. Here, I respond to one of the statements with which I disagree.

Most of the statements take the form: x is y, or x does y. Only two are explicitly directive, involving normative or prescriptive statements, i.e. should/must. One of these statements is:

“Students must learn with GenAI before they can question it.”

This particular statement is as far as the text goes as a whole towards saying what should be done about AI in a prescriptive sense, i.e. in this case, that it should be used. The implication is that students cannot have a valid opinion on AI without first using it (or, as it is framed here, “learning with it”). This could be seen, however, to preclude certain forms of learning. Reading about something, or hearing an argument about it, may arguably be as valid a form of educational experience as picking up a thing and using it. Moreover, if we use something, it does not always follow that we then understand it, or what we were doing with it (nor indeed what it might have been doing to us). In discussions about AI, an experiential element is sometimes offered as both an uncomplicated requisite and a simultaneous cause of learning.

Another critique of this framing is that people could potentially be forced to use harmful tools. For example, I have heard that Grok is a harmful tool and that it has been used to create explicit, pornographic, non-consensual deepfake images of women and children. I have never tried it myself. Do I need to create a Grok account and make pedophilic images before I have an opinion on whether this tool is useful or not, before I can question it?

This may seem an extreme example of AI harms, but it is worth considering that when we talk about GenAI, we are not usually talking about educational technologies carefully designed for students. Rather, we mostly mean general-purpose consumer products, whose long-term effects upon learning, knowledge production and education are as yet unknown. This, at least, is the opinion of a group of students from California State University – an institution which has conducted one of the highest-profile rollouts of GenAI (ChatGPT) in higher education. The students petitioned the university to “cancel its contract with OpenAI and to use the savings to protect jobs at CSU campuses facing layoffs”. Their stance aligns with warnings from researchers that smoking, asbestos and social media were all actively encouraged before we realised their harms. See Guest et al. (2025), whose paper Against the uncritical adoption of ‘AI’ technologies in academia gives examples of this type of framing of AI.

From Consentless Technologies to AI-Nothing

At the moment, we are staring in sadness, horror and denial at the USA’s descent into autocracy and the deeply racist and harmful ideas and actions of its government. For example, in a recent address at Davos, US President Donald Trump mocked the country of Somalia and talked about the “low-IQ” of Somali people. This was not widely reported, which raises the question of whether such statements are now deemed so normal and un-newsworthy that we have accepted that one of the most powerful people in the world is also one of the most racist. This person is the leader of the country from which we currently import all our GenAI technology for education. The USA is AI’s primary regulator (Rice, Quintana, & Alexandrou, 2025) and ideological driver, and its dominant cultural values will be increasingly embedded in it.

If AI is an artefact that can “have politics” (Winner, 1980), it is reasonable to take care in how we approach such technologies and in the language around how we use them. AI could be leading us towards forms of Authoritarian EdTech (Costello & Gow, 2025) composed of ensembles of “consentless technologies” characterised by surveillance, displays of power and a lack of any real concern for learners beyond how their actions enrich corporations.

Consentless technologies are those we become habituated to, in our educational spaces and workplaces, that sprout new features overnight, which not-so-subtly demand that we use them: “Would you like me to write this for you ✨?”

Last year, for example, a “Homework help” feature was introduced to Google’s Chrome browser. It only activated itself when it detected that users were accessing a VLE/LMS. If they were, it prompted them to use AI to interact with the content of the course. Typical activities it could perform were summarising course content or looking up related information, but also completing course quizzes.

It is safe to say that no one asked for the number of pop-ups and prompts that persistently urge us to use AI in social media, web browsers, email and word processors. It is reasonable to pause and ask ourselves what this relentless promotion is telling us about the nature of these tools, and what they are really designed to do.

Should we learn with AI?

What then should we teach our students, and what should they learn these lessons with? Given that we are being compelled to try AI every five minutes, learning with it does not seem like much of a rare commodity, nor much of a “marketable skill”. To differentiate oneself as a graduate in a “skills marketplace”, would it not be more advantageous to have aptitudes, skills and competencies derived from interactions with things that are not being so aggressively pushed upon us?

What would this look like? I cannot say exactly, or at least will not give you the type of answer that can be easily fed into a machine as just another Pavlovian prompt-response set. All I can advise is that, if everyone is doing something, and you blithely copy them, well then, you are giving it your very best shot at mediocrity.

AI Nothing

Lucy Suchman (2023) has decried the uncontroversial “Thingness” of AI. And in the course of my work, I sometimes feel under pressure to think about some thing or do some thing (“what must I do or think about AI?”). But my more abiding and enduring concern is in trying to meet others, through my teaching and my writing and my research, in places of no-thing, in great spaces out beyond the end of everything. (Hopefully, I will see you there someday.)

What do I mean by this? I mean can we really learn “with AI”? Can it be there for us? Is it there? And if it is, is it all there? And if it is all there is it all there is?

It is hard to escape the feeling that AI-everywhere and AI-anything is AI-nothing.

To be clear, I am not saying that we must not learn with AI.

Nor that we must learn with AI;

Neither with nor without AI;

Nor with and without AI.

These four propositions exhaust the possible options that could be used to clarify what I am saying we must do about AI in education (Nagarjuna, 1995).

You can decide, dear reader, whether it is helpful or unhelpful, that I am deeply committed to none of them.

References

Costello, E., & Gow, S. (2025). Authoritarian EdTech. Dialogues on Digital Society, 1(3), 302–306. https://doi.org/10.1177/29768640251377165

Cottom, T. M. (2025, March 29). The tech fantasy that powers A.I. is running on fumes. The New York Times. https://www.nytimes.com/2025/03/29/opinion/ai-tech-innovation.html

Farrell, H., & McCarthy, K. (2025). Manifesto for Generative AI in Higher Education: A living reflection on teaching, learning, and technology in an age of abundance. GenAI:N3, South East Technological University. https://manifesto.genain3.ie/

Guest, O., Suarez, M., Müller, B. C. N., van Meerkerk, E., Oude Groote Beverborg, A., de Haan, R., Reyes Elizondo, A., Blokpoel, M., Scharfenberg, N., Kleinherenbrink, A., Camerino, I., Woensdregt, M., Monett, D., Brown, J., Avraamidou, L., Alenda-Demoutiez, J., Hermans, F., & van Rooij, I. (2025). Against the uncritical adoption of ‘AI’ technologies in academia (Advance online publication). Zenodo. https://doi.org/10.5281/zenodo.17065099

Nagarjuna. (1995). The Fundamental Wisdom of the Middle Way: Nāgārjuna’s Mūlamadhyamakakārikā (J. L. Garfield, Trans.). Oxford University Press.

Rice, M., Quintana, R., & Alexandrou, A. (2025). Overlapping complexities regarding artificial intelligence and other advanced technologies in professional learning. Professional Development in Education, 51(3), 369–382. https://doi.org/10.1080/19415257.2025.2490350

Suchman, L. (2023). The uncontroversial ‘thingness’ of AI. Big Data & Society, 10(2), 20539517231206794.

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136. https://www.jstor.org/stable/20024652

Dr Eamon Costello

Associate Professor of Digital Learning
DCU

Dr Costello is an Associate Professor of Digital Learning at Dublin City University, President of the Irish Learning Technology Association and an accomplished teacher, researcher and public speaker. He is deeply curious about how we learn in different environments and is known as a creative and innovative communicator. He is concerned with how we actively shape our world so that we can have better and more humane places in which to think, work, live and learn. He is an advocate of using the right tool for the job or sometimes none at all, for not everything can be fixed or should be built.


Universities: GenAI – There’s No Stopping, Start Shaping!

By Frances O’Donnell, Instructional Designer, ATU
Estimated reading time: 8 minutes
Moving from debate to action: Implementing a cross-departmental strategy to shape the future of GenAI in higher education. Image (and typos) generated by Nano Banana.

Debate continues to swing between those pushing rapid adoption of GenAI and those urging caution, for example, panic about “AI taking over the classroom” and outrage at Big Tech’s labour practices. Both concerns are important, but are they and others like them causing inaction? In many cases, we are quietly watching students hand their data and their critical thinking over to the very Big Tech companies we are arguing against (while we still fly on holidays, stream on smart TVs and buy the same devices from the same companies). Pretending that GenAI in education is the one place we finally draw an ethical line, while doing nothing to make its use safer or more equitable, is not helpful. By all means, keep debating, but not at the cost of another three or four cohorts.

This opinion post suggests three things universities should address now, a minimal set of GenAI functions that should be available to staff and students, and a four-step teaching process to help lecturers rethink their role with GenAI.

Three things universities need to address now

1. Tell students and staff clearly what they can use (Déjà vu?)

Students and staff deserve clarity on which GenAI tools they have access to, what they can use them for and which ones are institutionally supported. Has your university provided this? No more grey areas or “ask your lecturer”. If people do not know this, GenAI use is pushed into secrecy. That secrecy hands more power to Big Tech to extract data and embed bias, while quietly eroding students’ and staff’s own cognitive abilities.

2. Untangle GenAI from “academic integrity”

Tightly linking GenAI to academic integrity was a mistake! It has created an endless debate about whether to permit or prohibit GenAI, which pushes use further underground. At this point, there is no real equity and no real academic integrity. Use of GenAI cannot simply be stopped, proved or disproved, so pretending otherwise, while holding endless anti‑AI discussions, will not lead to a solution. There is no putting GenAI back in the bottle!

3. Treat GenAI as a shared responsibility

GenAI affects curriculum design, assessment, student support, digital literacy, employability, libraries, disability support, IT, policy and everywhere in between. It cannot sit on the shoulders of one department or lead. Every university needs a cross‑departmental AI strategy that includes the student union, academic leads, IT, the data protection office, careers, student support, administration and teaching and learning personnel. Until leadership treats GenAI as systemic, lecturers will keep firefighting contradictions and marking assignments they know were AI-generated. Bring everyone to the table, and don’t adjourn until decisions have been made on clarity for students and staff (even if this clarity is dynamic in nature – do not continue to leave them navigating this alone for another three years).

What GenAI functions should be provided

At a minimum, institutions should give safe, equitable access to:

  • A campus-licensed GenAI model
    One model for all staff and students to ask questions, draft, summarise, explain and translate text, including support for multilingual learners.
  • Multimodal creation tools
    Tools to create images, audio, video (including avatars), diagrams, code, etc., with clear ethical and legal guidance.
  • Research support tools
    Tools to support research, transcribing, coding, summaries, theme mapping, citations, etc., that reinforce critical exploration.
  • Assessment and teaching design tools
    Tools to draft examples, case variations, rubrics, flashcards, questions, etc., stored inside institutional systems.
  • Custom agents
    Staff create and share custom AI agents configured for specific purposes: subject-specific scaffolding for students, or workflow agents for planning, resource creation and content adaptation. Keep interactions within institutional systems.
  • Accessibility focused GenAI
    Tools that deliver captions, plain language rewrites, alt text and personalised study materials. Many institutions already have these in place.

Safer GenAI tools for exploration, collaboration and reflection, then. Now what do staff and students do with them? This is where something like Gen.S.A.R. comes in: a potential approach where staff and students explore together with GenAI, and one that is adaptable to different contexts and disciplines.

Gen.S.A.R.

Gen.S.A.R. is simply a suggested starting point; there is no magic wand, but this may help to ignite practical ideas from others. It suggests a shift from passive content delivery to constructivist and experiential learning.

  • GenAI exploration and collaborative knowledge construction
  • Scrutinise and share
  • Apply in real-world contexts with a low or no-tech approach
  • Reflect and evaluate

It keeps critical thinking, collaboration and real-world application at the centre, with GenAI as a set of tools rather than a replacement for learning. Note: GenAI is a set of tools, not a human!

Phase 1: GenAI, constructing, not copy-pasting

Students use GenAI, the lecturer, and reputable sources to explore a concept or problem linked to the learning outcomes. Lecturers guide this exploration as students work individually or in groups. With ongoing lecturer input, students may choose whether to use GenAI or other sources, but all develop an understanding of GenAI’s role in learning.

Phase 2: Scrutinise and Share

The second phase focuses on scrutinising and sharing ideas with others, not just presenting them as finished facts. Students bring GenAI outputs, reputable sources and their own thinking into dialogue. They interrogate evidence, assumptions and perspectives in groups or class discussion (social constructivism, dialogic teaching). The lecturer – the content expert – oversees this process, identifies errors, draws attention to them and helps students clarify GenAI outputs.

Phase 3: Apply, low-tech, real-world

Screens step back. Students apply what they have discovered in low or no-tech ways: diagrams, mind maps, zines, prototypes, role plays, scenarios. They connect what they discovered to real contexts and show understanding through doing, making, explaining and practical application.

Phase 4: Reflect, evaluate and look forward

Students then evaluate and reflect on both their learning process and the role of GenAI. Using written, audio, video or visual reflections, they consider what they learned, how GenAI supported or distorted that learning and how this connects to their future. This reflective work, combined with artefacts from earlier phases, supports peer, self and lecturer assessment and moves us towards competency and readiness-based judgements.

Resourcing Gen.S.A.R.: yes, smaller class sizes and support would be required, but aspects of this can be implemented now (and are being implemented by some already). Time shifts to facilitation, co-learning, process feedback and authentic evaluation (fewer three-thousand-word essays). This approach is not perfect, but at least it is an approach, and one that draws on long‑standing learning theories, including constructivism, social constructivism, experiential learning, and traditions in inclusive and competency‑based education.

There’s No Stopping It, Time to Shape It

GenAI is not going away. Exploitative labour practices, data abuse and profit motives are real (and not exclusive to AI), and naming these harms is essential, but continuing to let these debates dominate any movement is not helpful. Universities can choose to lead (and I commend, not condemn, those who already are) with clear guidance, equitable access to safe GenAI tools and learning design. The alternative is all the risks associated with students and staff relying on personal accounts and workarounds.

For the integrity of education itself, it is time to translate debates into action. The genie is not going back in the bottle, and our profit-driven society is not only shaped by Big Tech but also by the everyday choices of those of us living privileged lives in westernised societies. It is time to be honest about our own complicity, to step out of the ivory tower and work with higher education students to navigate the impact GenAI is having on their lives right now.

Note: My views on GenAI for younger learners are very different; the suggestions here focus specifically on higher education.

Frances O’Donnell

Instructional Designer
ATU

Exploring the pros and cons of AI & GenAI in education, and indeed in society. Currently completing a Doctorate in Education with a focus on AI & Emerging Technologies.

Passionate about the potential education has to develop one’s self-confidence and self-worth, but frustrated by the fact it often does the opposite. AI has magnified our tendency to overassess and our inability to truly move away from rote learning.

Whether I’m carrying out the role of an instructional designer, or delivering workshops or researching, I think we should work together to make education a catalyst of change where learners are empowered to become confident as well as socially and environmentally conscious members of society. With or without AI, let’s change the perception of what success looks like for young people.


How AI Is Challenging the Credibility of Some Online Courses


Questioning the digital degree: AI-generated work is forcing educators to reassess the integrity and perceived value of completion certificates for online courses. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

Mohammed Estaiteyeh argues that generative AI has exposed fundamental weaknesses in asynchronous online learning, where instructors cannot observe students’ thinking or verify authorship. Traditional assessments—discussion boards, reflective posts, essays, and multimedia assignments—are now easily replaced or augmented by AI tools capable of producing personalised, citation-matched work indistinguishable from human output. Detection tools and remote proctoring offer little protection and raise serious equity and ethical issues. Estaiteyeh warns that without systemic redesign, institutions risk issuing credentials that no longer guarantee genuine learning. He advocates integrating oral exams, experiential learning with external verification, and programme-level redesign to maintain authenticity and uphold academic integrity in the AI era.

Key Points

  • Asynchronous online courses face the highest risk of undetectable AI substitution.
  • Discussion boards, reflections, essays, and even citations can be convincingly AI-generated.
  • AI detectors and remote proctoring are unreliable, inequitable, and ethically problematic.
  • Oral exams and experiential assessments offer partial safeguards but require major redesign.
  • Institutions must invest in structural change or risk turning asynchronous programmes into “credential mills.”

URL

https://theconversation.com/how-ai-is-challenging-the-credibility-of-some-online-courses-264851

Summary generated by ChatGPT 5