AI Could Revolutionise Higher Education in a Way We Did Not Expect

by Brian Mulligan – e-learning consultant with Universal Learning Systems (ulsystems.com)
Artificial intelligence is poised to unleash a revolution in higher education, not in the ways we’ve conventionally imagined, but through unexpected and profound transformations. This image visualises AI as a central, dynamic force reshaping academic landscapes, curriculum delivery, and the very nature of learning in universities. Image (and typos) generated by Nano Banana.

The current conversation about Artificial Intelligence (AI) in higher education primarily focuses on efficiency and impact. People talk about how AI can personalise learning, streamline administrative tasks, and help colleges “do more with less.” For decades, every new technology, from online training to MOOCs, promised a similar transformation. Generative AI certainly offers powerful tools to enhance existing processes.

However, the truly revolutionary potential of AI in higher education may come from a more critical and urgent pressure: its significant challenge to the integrity of academic credentials and the learning processes they are supposed to represent.

Historically, colleges haven’t had a strong incentive to completely overhaul their teaching models just because new technology arrived. Traditional lectures, established assessment methods, and the value of a physical campus have remained largely entrenched. Technology usually just served to augment existing practices, not to transform the underlying structures of teaching, learning, and accreditation.

AI, however, may be a different kind of catalyst for change.

The Integrity Challenge

AI’s ability to create human-quality text, solve complex problems, and produce creative outputs has presented a serious challenge to academic integrity. Reports show a significant rise in AI-driven cheating, with many students now routinely using these tools to complete their coursework. For a growing number of students, offloading cognitive labour, from summarising readings to generating entire essays, to AI is becoming the new norm.

This widespread and mostly undetectable cheating compromises the entire purpose of assessment: to verify genuine learning and award credible qualifications. Even students committed to authentic learning feel compromised, forced to compete against peers using AI for an unfair advantage.

Crucially, even when AI use is approved, there’s a legitimate concern that it can undermine the learning process itself. If students rely on AI for foundational tasks like summarisation and idea generation, they may bypass essential cognitive engagement and the development of critical thinking. This reliance can lead to intellectual laziness, meaning the credentials universities bestow may no longer reliably signify genuine knowledge and skills. This creates an urgent imperative for institutions to act.

The Shift to Authentic Learning

While many believe we can address this simply by redesigning assignments, the challenge invites, and may even require, a structural shift towards more radical educational models. These new approaches, which have been emerging to address the challenges of quality, access and cost, may also prove to be the most effective ways of addressing academic integrity challenges.

To illustrate the point, let’s look at three examples of such emerging models:

  1. Flipped Learning: Students engage with core content independently online. Valuable in-person time is then dedicated to active learning like problem-solving, discussions, and collaborative projects. Educators can directly observe the application of knowledge, allowing for a more authentic assessment of understanding.
  2. Project-Based Learning (PBL): Often seen as an integrated flipped model, PBL immerses students in complex, integrated projects over extended periods. The focus is on applying knowledge from multiple modules and independent research to solve real-world problems. These projects demand sustained, supervised engagement, creative synthesis, and complex problem-solving, capabilities that are very hard to simply outsource to AI.
  3. Work-Based Learning (WBL): A significant part of the student’s journey takes place in authentic workplace settings. The emphasis shifts entirely to the demonstrable application of skills and knowledge in genuine professional contexts, a feat AI alone cannot achieve. Assessment moves to evaluating how a student performs and reflects in their role, including how they effectively and ethically integrate AI tools professionally.

AI as the Enabler of Change

Shifting to these models isn’t easy. Can institutions afford the resources to develop rich content, intricate project designs, and robust supervisory frameworks? Creating and assessing numerous, varied, and authentic tasks requires significant time and financial investment.

This is where technology, now including AI itself, becomes the key enabler for the feasibility of these new pedagogical approaches. Learning technologies, intelligently deployed, can help by:

  • Affordably Creating Content: AI tools rapidly develop diverse learning materials, including texts, videos and formative quizzes as well as more sophisticated assessment designs.
  • Providing Automated Learning Support: AI-powered tutors and chatbots offer 24/7 support, guiding students through challenging material, which personalises the learning journey.
  • Monitoring Independent Work: Learning analytics, enhanced by AI, track student engagement and flag struggling individuals. This allows educators to provide timely, targeted human intervention (a minimal sketch of this flagging logic follows this list).
  • Easing the Assessment Burden: Technology can streamline the heavy workload associated with more varied assignments. Simple digital tools like structured rubrics and templated feedback systems free up educator time for nuanced, human guidance.
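
To make the analytics point concrete, here is a minimal sketch of the kind of engagement-flagging logic such a system might apply. The metric names and thresholds are illustrative assumptions, not features of any particular analytics product.

```python
from dataclasses import dataclass

# Illustrative thresholds; in practice these would be calibrated per course.
MIN_LOGINS_PER_WEEK = 2.0
MIN_QUIZ_COMPLETION = 0.5

@dataclass
class EngagementRecord:
    student_id: str
    logins_per_week: float   # average weekly VLE logins
    quiz_completion: float   # fraction of formative quizzes attempted

def flag_for_intervention(records: list[EngagementRecord]) -> list[str]:
    """Return the IDs of students whose engagement suggests they are
    struggling, so an educator can follow up with targeted human support."""
    return [
        r.student_id
        for r in records
        if r.logins_per_week < MIN_LOGINS_PER_WEEK
        or r.quiz_completion < MIN_QUIZ_COMPLETION
    ]

records = [
    EngagementRecord("s001", logins_per_week=4.0, quiz_completion=0.9),
    EngagementRecord("s002", logins_per_week=1.0, quiz_completion=0.3),
]
print(flag_for_intervention(records))  # ['s002']
```

The point of such a sketch is that the technology only surfaces candidates for attention; the intervention itself remains human.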

In summary, the most significant impact of AI isn’t the familiar promise of doing things better or faster. By undermining traditional methods of learning verification through the ease of academic dishonesty, AI has created an unavoidable pressure for systemic change. It forces colleges to reconsider what they are assessing and what value their degrees truly represent.

It’s that AI, by challenging the old system so thoroughly, makes the redesign of higher education a critical necessity.

Brian Mulligan

E-learning Consultant
Universal Learning Systems (ulsystems.com)

Brian Mulligan is an e-learning consultant with Universal Learning Systems (ulsystems.com), having retired as Head of Online Learning Innovation at Atlantic Technological University in Sligo in 2022. His current interests include innovative models of higher education and the strategic use of learning technologies in higher education.




Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is plainly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the reasoning behind the answer cannot be satisfactorily explained. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool, but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class: the product of poorly constructed prompts fed into (exclusively) ChatGPT, and complete faith in whatever response is generated. The deficit is clearly dual: a lack of foundational knowledge of the topic, and a complete absence of critical engagement with the AI’s output.

Remedying this begins not with a single ‘aha!’ moment but with a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their task is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?
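
For students who want a repeatable way to run this interrogation, the checklist can be written down as a simple structure. This is a minimal sketch: the questions paraphrase those above, and the code is an illustrative assumption rather than the module’s actual framework.

```python
# The interrogation checklist, paraphrased from the approach described above.
CRITIQUE_CHECKLIST = [
    "Does the output actually adhere to every instruction in the prompt?",
    "Can every claim and statement be verified with a credible, existing source?",
    "Are all cited sources real, and are their findings represented accurately?",
    "Is there a hidden bias or leading tone that misrepresents the topic?",
    "Does the prose make its points succinctly, in the student's own voice?",
]

def critique_report(answers: dict[str, bool]) -> str:
    """Summarise yes/no judgements on each question into a short verdict."""
    failures = [q for q, passed in answers.items() if not passed]
    if not failures:
        return "No obvious flaws found; still verify sources manually."
    return "Flaws to investigate:\n" + "\n".join(f"- {q}" for q in failures)

answers = dict.fromkeys(CRITIQUE_CHECKLIST, True)
answers[CRITIQUE_CHECKLIST[1]] = False  # e.g. a citation could not be verified
print(critique_report(answers))
```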

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.




AI Adoption & Education for SMEs

by Patrick Shields – AI PhD Researcher, Munster Technological University
Small and Medium-sized Enterprises (SMEs) are increasingly looking to leverage AI, but successful adoption requires proper education and strategic integration. This image represents the crucial need for training and understanding to empower SMEs to harness AI for business growth and innovation. Image (and typos) generated by Nano Banana.

Aligning National AI Goals With Local Business Realities

As third-level institutions launch AI courses across multiple disciplines this semester, there is a unique opportunity to support an essential business cohort in this country: small and medium-sized enterprises (SMEs). In Ireland, SMEs account for over 99% of all businesses according to the Central Statistics Office. They are also struggling with AI adoption in comparison to their multinational counterparts.

Recent research has outlined how SMEs are adopting AI in a piecemeal and fragmented fashion, with just 10% possessing any AI strategy at all. Not having a strategy may indicate an absence of policy, and therein lies a significant communication issue at the heart of the AI adoption challenge. Further insights describe how four out of five business leaders believe AI is being used within their companies with little to no guardrails. This presents a significant challenge to Ireland’s National AI Strategy, which was originally published in 2021 but has since been updated to include several initiatives, such as the objective of establishing an AI awareness campaign for SMEs. The Government recognises that to achieve the original goal of 75% of all businesses embracing AI by 2030, and all the investment that this will encourage, it will be essential to address the gap between SMEs and their multinational counterparts. Perhaps these endeavours can be supported at third level, especially given the percentage of businesses that fall into the SME bracket and the demand for upskilling.

Turning AI Potential Into Practical Know-How

Having spent the summer months of 2025 meeting businesses as part of a Chamber of Commerce AI mentoring initiative in the South East of Ireland, I believe there is a significant education gap here that third-level institutions could help to close. It became clear that the business representatives I spoke to had serious questions about how to properly commence their AI journeys. For them, it wasn’t about the technical element: many of their existing programs and applications were adding AI features and introducing new AI-enabled tools which they could easily access. The prominent issue was managing the deployment of the technology in a way that matched employee buy-in with integrated, sustained and appropriate usage for maximum benefit. They require frameworks and education to roll this out effectively.

A real-world story:

As I returned to my AI adoption PhD studies this autumn, I had the pleasure of meeting a part-time student employed by a local tech company in Cork. He wished to share the story of an AI initiative his employer had embarked upon, which had left him feeling anxious. The company had rolled out an AI meeting transcription tool to the surprise of its employees. There had been no prior communication about its deployment, and the tool was now in active use inside the organisation. This particular student felt that the AI was useful but had its limitations, such as not being able to identify speakers on the AI-generated meeting transcripts. He had his doubts as to whether the tool would stay in use at his workplace, and he had not received any policy documents related to its correct handling. He also stated that he was not aware of whether the organisation had an AI strategy, and the manner in which the technology had been integrated into daily operations had left him and his colleagues feeling quite uneasy. He felt that the team would have benefited enormously from some communication before and during the rollout. This same student was looking to commence a course in effective AI adoption and proclaimed his belief that the industry was crying out for more training and development in this area.

This tale of a potentially failing deployment is unfortunately not an isolated case. Reports in the US have shown that up to 95% of AI pilots fail before they ever make it to full production inside organisations. There may be many complex reasons for this, but one must certainly be a lack of understanding of the cultural impact of such change on teams, compounded by many examples of inadequate communication. It appears to me that despite the global investment in technology and the genuine intention to embrace AI, organisations continue to struggle with the employee education aspect of this transformation. If employers prioritise training and development in partnership with education providers, they may dramatically increase their chances of success. This may include the establishment of joint frameworks for AI deployment and management, with educational courses aligned to emerging business needs.

In adopting a people-development approach, companies may not only improve the chances of AI pilot success but also foster trust, alignment and buy-in. Surely this is the real promise of AI: a better, brighter organisational future, starting this winter, where your greatest asset – your people – are not left out in the cold but are supported by higher education.

Links

https://www.irishtimes.com/business/economy/2024/08/19/smes-account-for-998-of-all-businesses-in-ireland-and-employ-two-thirds-of-workforce/

https://www.tcd.ie/news_events/articles/2025/ai-expected-to-add-250bn-to-irelands-economy-by-2035

https://enterprise.gov.ie/en/publications/national-ai-strategy.html

https://enterprise.gov.ie/en/publications/national-ai-strategy-refresh-2024.html

https://www.forbes.com/sites/andreahill/2025/08/21/why-95-of-ai-pilots-fail-and-what-business-leaders-should-do-instead

Patrick Shields

AI PhD Researcher
Munster Technological University

I am a PhD Researcher at Munster Technological University, researching how small and medium-sized businesses adopt Artificial Intelligence with a particular focus on the human, strategic and organisational dynamics involved. My work looks beyond the technical layer, exploring how AI can be introduced in practical, low-friction ways that support real business outcomes.




Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

by Jonathan Sansom – Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Bridging the gap: This image illustrates how Microsoft Copilot can be leveraged in secondary education, moving from a “force analysis” of opportunities and challenges to the implementation of “pedagogical copilot agents” that assist both students and educators. Image (and typos) generated by Nano Banana.

At Hills Road, we’ve been living in the strange middle ground of generative AI adoption. If you charted its trajectory, it wouldn’t look like a neat curve or even the familiar ‘hype cycle’. It’s more like a tangled ball of wool: multiple forces pulling in competing directions.

The Forces at Play

Our recent work with Copilot Agents has made this more obvious. If we attempt a force analysis, the drivers for GenAI adoption are strong:

  • The need to equip students and staff with future-ready skills.
  • Policy and regulatory expectations, from DfE and Ofsted, to show assurance around AI integration.
  • National AI strategies that frame this as an essential area for investment.
  • The promise of personalised learning and workload reduction.
  • A pervasive cultural hype, blending existential narratives with a relentless ‘AI sales’ culture.

But there are also significant restraints:

  • Ongoing academic integrity concerns.
  • GDPR and data privacy ambiguity.
  • Patchy CPD and teacher digital confidence.
  • Digital equity and access challenges.
  • The energy cost of AI at scale.
  • Polarisation of educator opinion, and staff change fatigue.

The result is persistent dissonance. AI is neither fully embraced nor rejected; instead, we are all negotiating what it might mean in our own settings.
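
One way to see why the dissonance persists is to run the force analysis numerically, as in classic force-field analysis. The sketch below assigns entirely illustrative weights to the forces listed above; a real exercise would score them collaboratively with staff.

```python
# Toy force-field weighting of the drivers and restraints listed above.
# Every weight is an illustrative assumption.

drivers = {
    "future-ready skills": 4,
    "policy and regulatory expectations": 3,
    "national AI strategies": 2,
    "personalised learning and workload reduction": 4,
    "cultural hype": 3,
}

restraints = {
    "academic integrity concerns": 4,
    "GDPR and data privacy ambiguity": 3,
    "patchy CPD and digital confidence": 3,
    "digital equity and access": 2,
    "energy cost at scale": 2,
    "polarised opinion and change fatigue": 3,
}

d, r = sum(drivers.values()), sum(restraints.values())
print(f"drivers={d}, restraints={r}, net={d - r}")
# With these assumed weights the forces nearly cancel, one way of
# expressing the persistent dissonance described above.
```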

Educator-Led AI Design

One way we’ve tried to respond is through educator-led design. Our philosophy is simple: we shouldn’t just adopt GenAI; we must adapt it to fit our educational context.

That thinking first surfaced in experiments on Poe.com, where we created an Extended Project Qualification (EPQ) Virtual Mentor. It was popular, but it lived outside institutional control – not enterprise and not GDPR-secure.

So in 2025 we have moved everything in-house. Using Microsoft Copilot Studio, we created 36 curriculum-specific agents, one for each A Level subject, deployed directly inside Teams. These agents are connected to our SharePoint course resources, ensuring students and staff interact with AI in a trusted, institutionally managed environment.

Built-in Pedagogical Skills

Rather than thinking of these agents as simply ‘question answering machines’, we’ve tried to embed pedagogical skills that mirror what good teaching looks like. Each agent is structured around:

  • Explaining through metaphor and analogy – helping students access complex ideas in simple, relatable ways.
  • Prompting reflection – asking students to think aloud, reconsider, or connect their ideas.
  • Stretching higher-order thinking – moving beyond recall into analysis, synthesis, and evaluation.
  • Encouraging subject language use – reinforcing terminology in context.
  • Providing scaffolded progression – introducing concepts step by step, only deepening complexity as students respond.
  • Supporting responsible AI use – modelling ethical engagement and critical AI literacy.

These skills give the agents an educational texture. For example, if a sociology student asks: “What does patriarchy mean, but in normal terms?”, the agent won’t produce a dense definition. It will begin with a metaphor from everyday life, check understanding through a follow-up question, and then carefully layer in disciplinary concepts. The process is dialogic and recursive, echoing the scaffolding teachers already use in classrooms.
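
As a rough illustration of how these skills might be written down, the sketch below renders them as a generic instruction template. Copilot Studio agents are configured through Microsoft’s own low-code tooling, so this template, including the {subject} placeholder, is a hypothetical rendering for discussion, not our actual agent configuration.

```python
# Hypothetical instruction template encoding the six pedagogical skills.
# This is an illustrative sketch, not the deployed Copilot Studio setup.

PEDAGOGICAL_INSTRUCTIONS = """\
You are a mentor for A Level {subject}, grounded in the course resources
provided. For every student question:
1. Open with a metaphor or analogy from everyday life.
2. Check understanding with a short follow-up question before going deeper.
3. Push beyond recall towards analysis, synthesis and evaluation.
4. Reinforce correct subject terminology in context.
5. Scaffold: introduce concepts step by step, deepening only as the
   student responds.
6. Model responsible AI use: encourage the student to verify claims and
   to do their own thinking.
Never simply hand over a finished answer.
"""

print(PEDAGOGICAL_INSTRUCTIONS.format(subject="Sociology"))
```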

The Case for Copilot

We’re well aware that Microsoft Copilot Studio wasn’t designed as a pedagogical platform. It comes from the world of Power Automate, not the classroom. In many ways we’re “hijacking” it for our purposes. But it works.

The technical model is efficient: one Copilot Studio authoring licence, no full Copilot licences required, and all interactions handled through Teams chat. Data stays in tenancy, governed by our 365 permissions. It’s simple, secure, and scalable.

And crucially, it has allowed us to position AI as a learning partner, not a replacement for teaching. Our mantra remains: pedagogy first, technology second.

Lessons Learned So Far

From our pilots, a few lessons stand out:

  • Moving to an in-tenancy model was essential for trust.
  • Pedagogy must remain the driver – we want meaningful learning conversations, not shortcuts to answers.
  • Expectations must be realistic. Copilot Studio has clear limitations, especially in STEM contexts where dialogue is weaker.
  • AI integration is as much about culture, training, and mindset as it is about the underlying technology.

Looking Ahead

As we head into 2025–26, we’re expanding staff training, refining agent ‘skills’, and building metrics to assess impact. We know this is a long-haul project – five years at least – but it feels like the right direction.

The GenAI systems that students and teachers use in college were, in the main, designed by engineers, developers, and commercial actors. What’s missing is the educator’s voice. Our work is about inserting that voice: shaping AI not just as a tool for efficiency, but as an ally for reflection, questioning, and deeper thinking.

The challenge is to keep students out of what I’ve called the ‘Cognitive Valley’, that place where understanding is lost because thinking has been short-circuited. Good pedagogical AI can help us avoid that.

We’re not there yet. Some results are excellent, others uneven. But the work is underway, and the potential is undeniable. The task now is to make GenAI fit our context, not the other way around.

Jonathan Sansom

Director of Digital Strategy,
Hills Road Sixth Form College, Cambridge

Passionate about education, digital strategy in education, social and political perspectives on the purpose of learning, cultural change, wellbeing, group dynamics – and the mysteries of creativity…


Software / Services Used

Microsoft Copilot Studio – https://www.microsoft.com/en-us/microsoft-365-copilot/microsoft-copilot-studio



New elephants in the Generative AI room? Acknowledging the costs of GenAI to develop ‘critical AI literacy’

by Sue Beckingham, NTF PFHEA – Sheffield Hallam University and Peter Hartley NTF – Edge Hill University

Image created using DALL-E 2 in 2024 – reused to save cost

The GenAI industry regularly proclaims that the ‘next release’ of the chatbot of your choice will get closer to its ultimate goal – Artificial General Intelligence (AGI) – where AI can complete the widest range of tasks better than the best humans.

Are we providing sufficient help and support to our colleagues and students to understand and confront the implications of this direction of travel?

Or is AGI either an improbable dream or the ultimate threat to humanity?

Along with many (most?) GenAI users, we have seen impressive developments but not yet seen apps demonstrating anything close to AGI. OpenAI released GPT-5 in 2025 and Sam Altman (CEO) enthused: “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” But critical reaction to this new model was very mixed and he had to backtrack, admitting that the launch was “totally screwed up”. Hopefully, this provides a bit of breathing space for Higher Education – an opportunity to review how we encourage staff and students to adopt an appropriately critical and analytic perspective on GenAI – what we would call ‘critical AI literacy’.

Acknowledging the costs of Generative AI

Critical AI literacy involves understanding how to use GenAI responsibly and ethically – knowing when and when not to use it, and the reasons why. One elephant in the room is that GenAI incurs costs, and we need to acknowledge these.

Staff and students should be aware of ongoing debates on GenAI’s environmental impact, especially given increasing pressures to develop GenAI as your ‘always-on/24-7’ personal assistant. Incentives to treat GenAI as a ‘free’ service have increased with OpenAI’s move into education, offering free courses and certification. We also see increasing pressure to integrate GenAI into pre-university education, as illustrated by the recent ‘Back to School’ AI Summit 2025 and accompanying book, which promises a future of ‘creativity unleashed’.

We advocate a multi-factor definition of the ‘costs’ of GenAI so we can debate its capabilities and limitations from the broadest possible perspective. For example, we must evaluate opportunity costs to users. Recent research, including brain scans of individual users, found that over-use of GenAI (or specific patterns of use) can have a definite negative impact on users’ cognitive capacities and performance, including metacognitive laziness and cognitive debt. We group costs into four key areas: cost to the individual, to the environment, to knowledge, and to future jobs.

Cost of Generative AI to the individual, environment, knowledge and future jobs
(Beckingham and Hartley, 2025)

Cost to the individual

Fees – Subscription fees for GenAI tools range from free for the basic version through to different levels of paid upgrade (note: subscription tiers are continually changing). Premium models such as enterprise AI assistants are costly, limiting access to businesses or high-income users.

Accountability – Universities must provide clear guidelines on what can and cannot be shared with these tools, along with the concerns and implications of infringing copyright.

Over-reliance – Outcomes for learning depend on how GenAI apps are used. If students rely on AI-generated content too heavily or exclusively, they can make poor decisions, with a detrimental effect on skills.

Safety and mental health – Increased use of personal assistants providing ‘personal advice’ for socioemotional purposes can lead to increased social isolation.

Cost to the environment

Energy consumption – Training and deploying Large Language Models (LLMs) requires millions of GPU hours, and the demand increases substantially for image generation. The growth of data centres also creates concerns for energy supply (a back-of-envelope illustration follows this section).

Emissions and carbon footprint – Developing the technology creates emissions through the mining, manufacturing, transport and recycling processes.

Water consumption – Water needed for cooling in the data centres equates to millions of gallons per day.

e-Waste – This includes toxic materials (e.g. lead, barium, arsenic and chromium) in components within ever-increasing LLM servers. Obsolete servers generate substantial toxic emissions if not recycled properly.
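
To give a sense of the scale behind the energy figures above, here is a back-of-envelope calculation. Every input (total GPU-hours, per-GPU power draw, data-centre overhead) is an assumption chosen purely for illustration, not a figure for any specific model.

```python
# Back-of-envelope training-energy estimate. All inputs are illustrative
# assumptions, not published figures for any specific model.

gpu_hours = 5_000_000   # assumed total GPU-hours for one large training run
watts_per_gpu = 700     # assumed draw of a modern accelerator under load
pue = 1.2               # assumed data-centre power usage effectiveness

energy_mwh = gpu_hours * watts_per_gpu * pue / 1_000_000  # Wh to MWh
print(f"~{energy_mwh:,.0f} MWh")
# ~4,200 MWh: on these assumptions, roughly the annual electricity use
# of about a thousand European households.
```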

Cost to knowledge

Erosion of expertise – Models are trained on information publicly available on the internet, from formal partnerships with third parties, and on information that users, human trainers and researchers provide or generate.

Ethics – Ethical concerns highlight the lived experiences of those employed in data annotation and content moderation of text, images and video to remove toxic content.

Misinformation – Indiscriminate data scraping from blogs, social media and news sites, coupled with text entered by users of LLMs, can result in ‘regurgitation’ of personal data, hallucinations and deepfakes.

Bias – Algorithmic bias and discrimination occur when LLMs inherit social patterns, perpetuating stereotypes relating to gender, race, disability and other protected characteristics.

Cost to future jobs

Job displacement – GenAI is “reshaping industries and tasks across all sectors”, driving business transformation. But will these technologies replace rather than augment human work?

Job matching – Increased use of AI in recruitment and by jobseekers creates the risk that GenAI misrepresents skills, making it harder for profile-analysis tools to identify candidates who can genuinely evidence what they claim.

New skills – Reskilling and upskilling in AI and big data top the list of fastest-growing workplace skills. A lack of opportunity to reskill can lead to increased unemployment and inequality.

Wage suppression – Workers with skills that enable them to use AI may see their productivity and wages increase, whereas those who do not may see their wages decrease.

The way forward

We can only develop AI literacy by actively involving our student users. Previously we have argued that institutions/faculties should establish ‘collaborative sandpits’ offering opportunities for discussion and ‘co-creation’. Staff and students need space for this so that they can contribute to debates on what we really mean by ‘responsible use of GenAI’ and develop procedures to ensure responsible use. This is one area where collaborations/networks like GenAI N3 can make a significant contribution.

Sadly, we see too many commentaries which downplay, neglect or ignore GenAI’s issues and limitations. For example, the latest release from OpenAI – Sora 2 – offers text-to-video and has raised some important challenges to copyright regulations. There is also the continuing problem of hallucinations: despite recent claims of improved accuracy, GenAI is still susceptible to them. But how do we identify and guard against untruths which are confidently expressed by the chatbot?

We all need to develop a realistic perspective on GenAI’s likely development. The pace of technical change (and some rather secretive corporate habits) makes this very challenging for individuals, so we need proactive and co-ordinated approaches by course/programme teams. The practical implication of this discussion is that we all need to develop a much broader understanding of GenAI than a simple ‘press this button’ approach.

Reference

Beckingham, S. and Hartley, P. (2025). In search of ‘Responsible’ Generative AI (GenAI). In: Doolan, M.A. and Ritchie, L. (eds.) Transforming teaching excellence: Future proofing education for all. Leading Global Excellence in Pedagogy, Volume 3. UK: IFNTF Publishing. ISBN 978-1-7393772-2-9 (ebook). https://amzn.eu/d/gs6OV8X

Sue Beckingham

Associate Professor Learning and Teaching
Sheffield Hallam University

Sue Beckingham is an Associate Professor in Learning and Teaching at Sheffield Hallam University. Externally she is a Visiting Professor at Arden University and a Visiting Fellow at Edge Hill University. She is also a National Teaching Fellow, Principal Fellow of the Higher Education Academy and Senior Fellow of the Staff and Educational Developers Association. Her research interests include the use of technology to enhance active learning, and she has published and presented this work internationally as an invited keynote speaker. Recent book publications include Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment.

Peter Hartley

Visiting Professor
Edge Hill University

Peter Hartley is now a Higher Education Consultant and Visiting Professor at Edge Hill University, following previous roles as Professor of Education Development at the University of Bradford and Professor of Communication at Sheffield Hallam University. A National Teaching Fellow since 2000, he has promoted new technology in education, now focusing on the applications and implications of Generative AI, co-editing and contributing to the SEDA/Routledge publication Using Generative AI Effectively in Higher Education (2024; paperback edition 2025). He has also produced several guides and textbooks for students (e.g. as co-author of Success in Groupwork, 2nd Edn). Ongoing work includes programme assessment strategies, concept mapping and visual thinking.

