2025 Review – A Shared Conversation, Built Over Time

Estimated reading time: 9 minutes

From Individual Questions to Collective Practice

Since September, the GenAI:N3 blog has hosted a weekly series of reflections exploring what generative AI means for higher education: for teaching, learning, assessment, academic identity, and institutional responsibility. Early contributions captured a sector grappling with disruption, uncertainty, and unease, asking difficult questions about trust, integrity, creativity, and control at a moment when generative AI arrived faster than policy, pedagogy, or professional development could respond.

As the series unfolded, a clear shift began to emerge. Posts moved from individual reactions and early experimentation towards more structured sense-making, discipline-specific redesign, and, crucially, shared learning. The introduction of communities of practice as a deliberate strategy for AI upskilling marked a turning point in the conversation: from “How do I deal with this?” to “How do we learn, adapt, and shape this together?” Taken as a whole, the series traces that journey from disruption to agency, and from isolated responses to collective practice.

What makes this series distinctive is not simply its focus on generative AI, but the diversity of voices it brings together. Contributors include academic staff, professional staff, educational developers, students, and sector partners, each writing from their own context while engaging with a set of common challenges. The result is not a single narrative, but a constellation of perspectives that reflect the complexity of teaching and learning in an AI-shaped world.

29th September 2025 – Jim O’Mahony

Something Wicked This Way Comes

The GenAI:N3 blog series opens with a deliberately unsettling provocation, asking higher education to confront the unease, disruption, and uncertainty that generative AI has introduced into teaching, assessment, and academic identity. Rather than framing AI as either saviour or villain, this piece invites a more honest reckoning with fear, denial, and institutional inertia. It sets the tone for the series by arguing that ignoring GenAI is no longer an option; what matters now is how educators respond, individually and collectively, to a technology that has already crossed the threshold into everyday academic practice.

6th October 2025 – Dr Yannis

3 Things AI Can Do for You: The No-Nonsense Guide

Building directly on that initial unease, this post grounds the conversation in pragmatism. Stripping away hype and alarmism, it focuses on concrete, immediately useful ways AI can support academic work, from sense-making to productivity. The emphasis is not on replacement but augmentation, encouraging educators to experiment cautiously, critically, and with intent. In the arc of the series, this contribution marks a shift from fear to agency, demonstrating that engagement with AI can be practical, purposeful, and aligned with professional judgement.

13th October 2025 – Sue Beckingham & Peter Hartley

New Elephants in the Generative AI Room? Acknowledging the Costs of GenAI to Develop ‘Critical AI Literacy’

As confidence in experimentation grows, this post re-introduces necessary friction by surfacing the hidden costs of generative AI. Environmental impact, labour implications, equity, and ethical responsibility are brought into sharp focus, challenging overly simplistic narratives of efficiency and innovation. The authors remind readers that responsible adoption requires confronting uncomfortable trade-offs. Within the wider series, this piece deepens the discussion, insisting that values, sustainability, and social responsibility must sit alongside pedagogical opportunity.

20th October 2025 – Jonathan Sansom

Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

Here the conversation turns toward structured sense-making. Drawing on strategic and pedagogical frameworks, this post explores how educators and institutions can move beyond reactive responses to more deliberate design choices. The idea of AI as a “copilot” rather than an autonomous actor reframes the relationship between teacher, learner, and technology. In the narrative of the series, this contribution offers conceptual tools for navigating complexity, helping readers connect experimentation with strategy.

27th October 2025 – Patrick Shields

AI Adoption & Education for SMEs

Widening the lens beyond universities, this post examines AI adoption through the perspective of small and medium-sized enterprises, highlighting the skills, mindsets, and educational approaches needed to support workforce readiness. The crossover between higher education, lifelong learning, and industry becomes explicit. This piece situates GenAI not just as an academic concern, but as a societal one, reinforcing the importance of education systems that are responsive, connected, and outward-looking.

3rd November 2025 – Tadhg Blommerde

Dr Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

Returning firmly to the classroom, this reflective account explores what happens when students are encouraged to engage critically with AI rather than rely on it unthinkingly. Through curriculum design and assessment choices, learners begin to question outputs, assert their own judgement, and reclaim intellectual agency. This post is a turning point in the series, showing how thoughtful pedagogy can transform AI from a threat to academic integrity into a catalyst for deeper learning.

10th November 2025 – Brian Mulligan

AI Could Revolutionise Higher Education in a Way We Did Not Expect

This contribution steps back to consider second-order effects, arguing that the most significant impact of AI may not be efficiency or automation, but a reconfiguration of how learning, expertise, and value are understood. It challenges institutions to think beyond surface-level policy responses and to anticipate longer-term cultural shifts. Positioned mid-series, the post broadens the horizon, encouraging readers to think systemically rather than tactically.

17th November 2025 – Kerith George-Briant & Jack Hogan

This Is Not the End but a Beginning: Responding to “Something Wicked This Way Comes”

Explicitly dialoguing with the opening post, this response reframes the initial sense of threat as a starting point rather than a conclusion. The authors emphasise community, dialogue, and shared responsibility, arguing that collective reflection is essential if higher education is to navigate GenAI well. This piece reinforces one of the central through-lines of the series: that no single institution or individual has all the answers, but progress is possible through collaboration.

24th November 2025 – Bernie Goldbach

The Transformative Power of Communities of Practice in AI Upskilling for Educators

This post makes the case that the most sustainable way to build AI capability in education is not through one-off training sessions, but through communities of practice that support ongoing learning, experimentation, and shared problem-solving. It highlights how peer-to-peer dialogue helps educators move from cautious curiosity to confident, critical use of tools, while also creating space to discuss ethics, assessment, and evolving norms without judgement. Positioned within the blog series, it serves as a bridge between individual experimentation and institutional change: a reminder that AI upskilling is fundamentally social, and that collective learning structures are one of the best defences against both hype and paralysis.

1st December 2025 – Hazel Farrell et al.

Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy

Offering a discipline-specific lens, this post explores how music education is being rethought in light of AI, creativity, and emerging professional realities. Rather than diluting artistic practice, AI becomes a catalyst for re-examining what it means to teach, learn, and create. Within the series, this contribution demonstrates how GenAI conversations translate into authentic curriculum redesign, grounded in disciplinary values rather than generic solutions.

8th December 2025 – Ken McCarthy

Building the Manifesto: How We Got Here and What Comes Next

This reflective piece pulls together many of the threads running through the series, documenting the collaborative process behind the Manifesto for Generative AI in Higher Education. It positions the Manifesto not as a prescriptive policy document but as a living statement shaped by diverse voices, shared concerns, and collective aspiration. In the narrative arc, it represents a moment of synthesis, turning discussion into a shared point of reference.

15th December 2025 – Leigh Graves Wolf

Rebuilding Thought Networks in the Age of AI

Moving from frameworks to cognition, this post explores how AI is reshaping thinking itself. Rather than outsourcing thought, the author argues for intentionally rebuilding intellectual networks so that AI becomes part of, not a replacement for, human sense-making. This contribution deepens the series philosophically, reminding readers that the stakes of GenAI are as much cognitive and epistemic as they are technical.

22nd December 2025 – Frances O’Donnell

Universities: GenAI – There’s No Stopping, Start Shaping!

The series culminates with a clear call to action. Acknowledging both inevitability and responsibility, this post urges universities to move decisively from reaction to leadership. The emphasis is on shaping futures rather than resisting change, grounded in values, purpose, and public good. As a closing note, it captures the spirit of the entire series: GenAI is already here, but how it reshapes higher education remains a choice.

With Thanks – and an Invitation

This series exists because of the generosity, openness, and intellectual courage of its contributors. Each author took the time to reflect publicly, to question assumptions, to share practice, and to contribute thoughtfully to a conversation that is still very much in motion. Collectively, these posts embody the spirit of GenAI:N3 – collaborative, reflective, and committed to shaping the future of higher education with care rather than fear.

We would like to extend our sincere thanks to all who have contributed to the blog to date, and to those who have engaged with the posts through reading, sharing, and discussion. The conversation does not end here. If you are experimenting with generative AI in your teaching, supporting others to do so, grappling with its implications, or working with students as partners in this space, we warmly invite you to write a blog post of your own. Your perspective matters, and your experience can help others navigate this rapidly evolving landscape.

If you would like to contribute, please get in touch (blog@genain3.ie); we would love to hear from you.


Date | Title | Author | Link
29 September 2025 | Something Wicked This Way Comes | Jim O’Mahony | https://genain3.ie/something-wicked-this-way-comes/
6 October 2025 | 3 Things AI Can Do for You: The No-Nonsense Guide | Dr Yannis | https://genain3.ie/3-things-ai-can-do-for-you-the-no-nonsense-guide/
13 October 2025 | New Elephants in the Generative AI Room? Acknowledging the Costs of GenAI to Develop ‘Critical AI Literacy’ | Sue Beckingham & Peter Hartley | https://genain3.ie/new-elephants-in-the-generative-ai-room-acknowledging-the-costs-of-genai-to-develop-critical-ai-literacy/
20 October 2025 | Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents | Jonathan Sansom | https://genain3.ie/making-sense-of-genai-in-education-from-force-analysis-to-pedagogical-copilot-agents/
27 October 2025 | AI Adoption & Education for SMEs | Patrick Shields | https://genain3.ie/ai-adoption-education-for-smes/
3 November 2025 | Dr Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves | Tadhg Blommerde | https://genain3.ie/dr-strange-syllabus-or-how-my-students-learned-to-mistrust-ai-and-trust-themselves/
10 November 2025 | AI Could Revolutionise Higher Education in a Way We Did Not Expect | Brian Mulligan | https://genain3.ie/ai-could-revolutionise-higher-education-in-a-way-we-did-not-expect/
17 November 2025 | This Is Not the End but a Beginning: Responding to “Something Wicked This Way Comes” | Kerith George-Briant & Jack Hogan | https://genain3.ie/this-is-not-the-end-but-a-beginning-responding-to-something-wicked-this-way-comes/
24 November 2025 | The Transformative Power of Communities of Practice in AI Upskilling for Educators | Bernie Goldbach | https://genain3.ie/the-transformative-power-of-communities-of-practice-in-ai-upskilling-for-educators/
1 December 2025 | Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy | Hazel Farrell et al. | https://genain3.ie/teaching-the-future-how-tomorrows-music-educators-are-reimagining-pedagogy/
8 December 2025 | Building the Manifesto: How We Got Here and What Comes Next | Ken McCarthy | https://genain3.ie/building-the-manifesto-how-we-got-here-and-what-comes-next/
15 December 2025 | Rebuilding Thought Networks in the Age of AI | Leigh Graves Wolf | https://genain3.ie/rebuilding-thought-networks-in-the-age-of-ai/
22 December 2025 | Universities: GenAI – There’s No Stopping, Start Shaping! | Frances O’Donnell | https://genain3.ie/universities-genai-theres-no-stopping-start-shaping/

New elephants in the Generative AI room? Acknowledging the costs of GenAI to develop ‘critical AI literacy’

by Sue Beckingham, NTF PFHEA – Sheffield Hallam University and Peter Hartley NTF – Edge Hill University
Estimated reading time: 8 minutes
Image created using DALL·E 2 (2024) – reused to save cost

The GenAI industry regularly proclaims that the ‘next release’ of the chatbot of your choice will get closer to its ultimate goal – Artificial General Intelligence (AGI) – where AI can complete the widest range of tasks better than the best humans.

Are we providing sufficient help and support to our colleagues and students to understand and confront the implications of this direction of travel?

Or is AGI either an improbable dream or the ultimate threat to humanity?

Along with many (most?) GenAI users, we have seen impressive developments but not yet seen apps demonstrating anything close to AGI. OpenAI released GPT-5 in 2025 and Sam Altman (CEO) enthused: “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” But critical reaction to this new model was very mixed and he had to backtrack, admitting that the launch was “totally screwed up”. Hopefully, this provides a bit of breathing space for Higher Education – an opportunity to review how we encourage staff and students to adopt an appropriately critical and analytic perspective on GenAI – what we would call ‘critical AI literacy’.

Acknowledging the costs of Generative AI

Critical AI literacy involves understanding how to use GenAI responsibly and ethically – knowing when and when not to use it, and the reasons why. One elephant in the room is that GenAI incurs costs, and we need to acknowledge these.

Staff and students should be aware of ongoing debates on GenAI’s environmental impact, especially given increasing pressures to develop GenAI as your ‘always-on/24-7’ personal assistant. Incentives to treat GenAI as a ‘free’ service have increased with OpenAI’s move into education, offering free courses and certification. We also see increasing pressure to integrate GenAI into pre-university education, as illustrated by the recent ‘Back to School’ AI Summit 2025 and accompanying book, which promises a future of ‘creativity unleashed’.

We advocate a multi-factor definition of the ‘costs’ of GenAI so we can debate its capabilities and limitations from the broadest possible perspective. For example, we must evaluate opportunity costs to users. Recent research, including brain scans on individual users, found that over-use of GenAI (or specific patterns of use) can have a definite negative impact on users’ cognitive capacities and performance, including metacognitive laziness and cognitive debt. We group costs into four key areas: cost to the individual, to the environment, to knowledge, and cost to future jobs.

Cost of Generative AI to the individual, environment, knowledge and future jobs
(Beckingham and Hartley, 2025)

Cost to the individual

Fees: subscription fees for GenAI tools range from free for the basic version through to different levels of paid upgrades (Note: subscription tiers are continually changing). Premium models such as enterprise AI assistants are costly, limiting access to businesses or high-income users.

Accountability: Universities must provide clear guidelines on what can and cannot be shared with these tools, along with the concerns and implications of infringing copyright.

Over-reliance: Outcomes for learning depend on how GenAI apps are used. If students rely on AI-generated content too heavily or exclusively, they can make poor decisions, with a detrimental effect on skills.

Safety and mental health: Increased use of personal assistants providing ‘personal advice’ for socioemotional purposes can lead to increased social isolation.

Cost to the environment

Energy consumption – The infrastructure used for training and deploying Large Language Models (LLMs) requires millions of GPU hours to train, and increases substantially for image generation. The growth of data centres also creates concerns for energy supply.

Emissions and carbon footprint – Developing the technology creates emissions through the mining, manufacturing, transport and recycling processes.

Water consumption – Water needed for cooling in the data centres equates to millions of gallons per day.

e-Waste – This includes toxic materials (e.g. lead, barium, arsenic and chromium) in components within ever-increasing LLM servers. Obsolete servers generate substantial toxic emissions if not recycled properly.

Cost to knowledge

Erosion of expertise – Models are trained on information publicly available on the internet, on information from formal partnerships with third parties, and on information that users, human trainers and researchers provide or generate.

Ethics – Ethical concerns highlight the lived experiences of those employed in data annotation and content moderation of text, images and video to remove toxic content.

Misinformation – Indiscriminate data scraping from blogs, social media, and news sites, coupled with text entered by users of LLMs, can result in ‘regurgitation’ of personal data, hallucinations and deepfakes.

Bias – Algorithmic bias and discrimination occur when LLMs inherit social patterns, perpetuating stereotypes relating to gender, race, disability and other protected characteristics.

Cost to future jobs

Job displacement – GenAI is “reshaping industries and tasks across all sectors”, driving business transformation. But will these technologies replace rather than augment human work?

Job matching – Increased use of AI in recruitment and by jobseekers creates risks that GenAI misrepresents skills. This makes it harder for profile-analysis tools to accurately match skills to candidates who can genuinely evidence them.

New skills – Reskilling and upskilling in AI and big data tops the list of fastest-growing workplace skills. A lack of opportunity to do so can lead to increased unemployment and inequality.

Wage suppression – Workers with skills that enable them to use AI may see their productivity and wages increase, whereas those who do not may see their wages decrease.

The way forward

We can only develop AI literacy by actively involving our student users. Previously we have argued that institutions/faculties should establish ‘collaborative sandpits’ offering opportunities for discussion and ‘co-creation’. Staff and students need space for this so that they can contribute to debates on what we really mean by ‘responsible use of GenAI’ and develop procedures to ensure responsible use. This is one area where collaborations/networks like GenAI N3 can make a significant contribution.

Sadly, we see too many commentaries which downplay, neglect or ignore GenAI’s issues and limitations. For example, the latest release from OpenAI – Sora 2 – offers text-to-video generation and has raised some important challenges to copyright regulations. There is also the continuing problem of hallucinations. Despite recent claims of improved accuracy, GenAI is still susceptible to them. But how do we identify and guard against untruths which are confidently expressed by the chatbot?

We all need to develop a realistic perspective on GenAI’s likely development. The pace of technical change (and some rather secretive corporate habits) makes this very challenging for individuals, so we need proactive and co-ordinated approaches by course/programme teams. The practical implication of this discussion is that we all need to develop a much broader understanding of GenAI than a simple ‘press this button’ approach.

Reference

Beckingham, S. and Hartley, P., (2025). In search of ‘Responsible’ Generative AI (GenAI). In: Doolan M.A. and Ritchie, L. eds. Transforming teaching excellence: Future proofing education for all. Leading Global Excellence in Pedagogy, Volume 3. UK: IFNTF Publishing. ISBN 978-1-7393772-2-9 (ebook). https://amzn.eu/d/gs6OV8X

Sue Beckingham

Associate Professor Learning and Teaching
Sheffield Hallam University

Sue Beckingham is an Associate Professor in Learning and Teaching at Sheffield Hallam University. Externally she is a Visiting Professor at Arden University and a Visiting Fellow at Edge Hill University. She is also a National Teaching Fellow, Principal Fellow of the Higher Education Academy and Senior Fellow of the Staff and Educational Developers Association. Her research interests include the use of technology to enhance active learning, and she has published and presented this work internationally as an invited keynote speaker. Recent book publications include Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment.

Peter Hartley

Visiting Professor
Edge Hill University

Peter Hartley is now a Higher Education Consultant and Visiting Professor at Edge Hill University, following previous roles as Professor of Education Development at the University of Bradford and Professor of Communication at Sheffield Hallam University. A National Teaching Fellow since 2000, he has promoted new technology in education, now focusing on the applications and implications of Generative AI, co-editing and contributing to the SEDA/Routledge publication Using Generative AI Effectively in Higher Education (2024; paperback edition 2025). He has also produced several guides and textbooks for students (e.g. co-author of Success in Groupwork, 2nd Edn). Ongoing work includes programme assessment strategies, concept mapping and visual thinking.

