New elephants in the Generative AI room? Acknowledging the costs of GenAI to develop ‘critical AI literacy’

by Sue Beckingham, NTF PFHEA – Sheffield Hallam University and Peter Hartley NTF – Edge Hill University
Estimated reading time: 8 minutes
Image created using DALL-E 2, 2024 – reused to save cost

The GenAI industry regularly proclaims that the ‘next release’ of the chatbot of your choice will get closer to its ultimate goal – Artificial General Intelligence (AGI) – where AI can complete the widest range of tasks better than the best humans.

Are we providing sufficient help and support to our colleagues and students to understand and confront the implications of this direction of travel?

Or is AGI either an improbable dream or the ultimate threat to humanity?

Along with many (most?) GenAI users, we have seen impressive developments but not yet seen apps demonstrating anything close to AGI. OpenAI released GPT-5 in 2025 and Sam Altman (CEO) enthused: “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” But critical reaction to this new model was very mixed and he had to backtrack, admitting that the launch was “totally screwed up”. Hopefully, this provides a bit of breathing space for Higher Education – an opportunity to review how we encourage staff and students to adopt an appropriately critical and analytic perspective on GenAI – what we would call ‘critical AI literacy’.

Acknowledging the costs of Generative AI

Critical AI literacy involves understanding how to use GenAI responsibly and ethically – knowing when and when not to use it, and the reasons why. One elephant in the room is that GenAI incurs costs, and we need to acknowledge these.

Staff and students should be aware of ongoing debates on GenAI’s environmental impact, especially given increasing pressures to develop GenAI as your ‘always-on/24-7’ personal assistant. Incentives to treat GenAI as a ‘free’ service have increased with OpenAI’s move into education, offering free courses and certification. We also see increasing pressure to integrate GenAI into pre-university education, as illustrated by the recent ‘Back to School’ AI Summit 2025 and accompanying book, which promises a future of ‘creativity unleashed’.

We advocate a multi-factor definition of the ‘costs’ of GenAI so we can debate its capabilities and limitations from the broadest possible perspective. For example, we must evaluate opportunity costs to users. Recent research, including brain scans of individual users, found that over-use of GenAI (or specific patterns of use) can have a definite negative impact on users’ cognitive capacities and performance, including metacognitive laziness and cognitive debt. We group costs into four key areas: costs to the individual, to the environment, to knowledge and to future jobs.

Cost of Generative AI to the individual, environment, knowledge and future jobs
(Beckingham and Hartley, 2025)

Cost to the individual

Fees: subscription fees for GenAI tools range from free for the basic version through to different levels of paid upgrades (Note: subscription tiers are continually changing). Premium models such as enterprise AI assistants are costly, limiting access to businesses or high-income users.

Accountability: Universities must provide clear guidelines on what can and cannot be shared with these tools, along with the concerns and implications of infringing copyright.

Over-reliance: Outcomes for learning depend on how GenAI apps are used. If students rely on AI-generated content too heavily or exclusively, they can make poor decisions, with a detrimental effect on skills.

Safety and mental health: Increased use of personal assistants providing ‘personal advice’ for socioemotional purposes can lead to increased social isolation.

Cost to the environment

Energy consumption – Training and deploying Large Language Models (LLMs) requires millions of GPU hours, and energy use increases substantially for image generation. The growth of data centres also raises concerns about energy supply.

Emissions and carbon footprint – Developing the technology creates emissions through the mining, manufacturing, transport and recycling processes.

Water consumption – Water needed for cooling data centres amounts to millions of gallons per day.

e-Waste – This includes toxic materials (e.g. lead, barium, arsenic and chromium) in components within the ever-growing fleet of LLM servers. Obsolete servers generate substantial toxic emissions if not recycled properly.

Cost to knowledge

Erosion of expertise – Models are trained on information publicly available on the internet, on data from formal partnerships with third parties, and on information that users, human trainers and researchers provide or generate.

Ethics – Ethical concerns highlight the lived experiences of those employed in data annotation and content moderation of text, images and video to remove toxic content.

Misinformation – Indiscriminate data scraping from blogs, social media and news sites, coupled with text entered by users of LLMs, can result in ‘regurgitation’ of personal data, hallucinations and deepfakes.

Bias – Algorithmic bias and discrimination occur when LLMs inherit social patterns, perpetuating stereotypes relating to gender, race, disability and other protected characteristics.

Cost to future jobs

Job displacement – GenAI is “reshaping industries and tasks across all sectors”, driving business transformation. But will these technologies replace rather than augment human work?

Job matching – Increased use of AI both in recruitment and by jobseekers creates risks that GenAI misrepresents skills. This makes it harder for profile-screening tools to match advertised skills with candidates who can genuinely evidence them.

New skills – Reskilling and upskilling in AI and big data top the list of fastest-growing workplace skills. A lack of opportunity to do so can lead to increased unemployment and inequality.

Wage suppression – Workers with skills that enable them to use AI may see their productivity and wages increase, whereas those who do not may see their wages decrease.

The way forward

We can only develop AI literacy by actively involving our student users. Previously we have argued that institutions/faculties should establish ‘collaborative sandpits’ offering opportunities for discussion and ‘co-creation’. Staff and students need space for this so that they can contribute to debates on what we really mean by ‘responsible use of GenAI’ and develop procedures to ensure responsible use. This is one area where collaborations/networks like GenAI N3 can make a significant contribution.

Sadly, we see too many commentaries which downplay, neglect or ignore GenAI’s issues and limitations. For example, the latest release from OpenAI – Sora 2 – offers text-to-video and has raised some important challenges to copyright regulations. There is also the continuing problem of hallucinations: despite recent claims of improved accuracy, GenAI is still susceptible to them. But how do we identify and guard against untruths which are confidently expressed by the chatbot?

We all need to develop a realistic perspective on GenAI’s likely development. The pace of technical change (and some rather secretive corporate habits) makes this very challenging for individuals, so we need proactive and co-ordinated approaches by course/programme teams. The practical implication of this discussion is that we all need to develop a much broader understanding of GenAI than a simple ‘press this button’ approach.

Reference

Beckingham, S. and Hartley, P. (2025). In search of ‘Responsible’ Generative AI (GenAI). In: Doolan, M.A. and Ritchie, L. (eds.) Transforming Teaching Excellence: Future Proofing Education for All. Leading Global Excellence in Pedagogy, Volume 3. UK: IFNTF Publishing. ISBN 978-1-7393772-2-9 (ebook). https://amzn.eu/d/gs6OV8X

Sue Beckingham

Associate Professor Learning and Teaching
Sheffield Hallam University

Sue Beckingham is an Associate Professor in Learning and Teaching at Sheffield Hallam University. Externally she is a Visiting Professor at Arden University and a Visiting Fellow at Edge Hill University. She is also a National Teaching Fellow, Principal Fellow of the Higher Education Academy and Senior Fellow of the Staff and Educational Developers Association. Her research interests include the use of technology to enhance active learning, and she has published and presented this work internationally as an invited keynote speaker. Recent book publications include Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment.

Peter Hartley

Visiting Professor
Edge Hill University

Peter Hartley is now a Higher Education Consultant and Visiting Professor at Edge Hill University, following previous roles as Professor of Education Development at the University of Bradford and Professor of Communication at Sheffield Hallam University. A National Teaching Fellow since 2000, he has promoted new technology in education, now focusing on the applications and implications of Generative AI, co-editing and contributing to the SEDA/Routledge publication Using Generative AI Effectively in Higher Education (2024; paperback edition 2025). He has also produced several guides and textbooks for students (e.g. as co-author of Success in Groupwork, 2nd edn). Ongoing work includes programme assessment strategies, concept mapping and visual thinking.


The Future Learner: (Digital) Education Reimagined for 2040


Source

European Digital Education Hub (EDEH), European Commission, 2025

Summary

This foresight report explores four plausible futures for digital education in 2040, emphasising how generative and intelligent technologies could redefine learning, teaching, and human connection. Developed by the EDEH “Future Learner” squad, the study uses scenario planning to imagine how trends such as the rise of generative AI (GenAI), virtual assistance, lifelong learning, and responsible technology use might shape the education landscape. The report identifies 16 major drivers of change, highlighting GenAI’s central role in personalising learning, automating administration, and transforming the balance between human and machine intelligence.

In the most optimistic scenario – Empowered Learning – AI-powered personal assistants, immersive technologies, and data-driven systems make education highly adaptive, equitable, and learner-centred. In contrast, the Constrained Education scenario imagines over-regulated, energy-limited systems where AI use is tightly controlled, while The End of Human Knowledge portrays an AI-saturated collapse where truth, trust, and human expertise dissolve. The final Transformative Vision outlines a balanced, ethical future in which AI enhances – not replaces – human intelligence, fostering empathy, sustainability, and lifelong learning. Across all futures, the report calls for human oversight, explainability, and shared responsibility to ensure that AI in education remains ethical, inclusive, and transparent.

Key Points

  • Generative AI and intelligent systems are central to all future learning scenarios.
  • AI personal assistants, XR, and data analytics drive personalised, lifelong education.
  • Responsible use and ethical frameworks are essential to maintain human agency.
  • Overreliance on AI risks misinformation, cognitive overload, and social fragmentation.
  • Sustainability and carbon-neutral AI systems are core to educational innovation.
  • Data privacy and explainability remain critical for trust in AI-driven learning.
  • Equity and inclusion depend on access to AI-enhanced tools and digital literacy.
  • The line between human and artificial authorship will blur without strong governance.
  • Teachers evolve into mentors and facilitators supported by AI co-workers.
  • The most resilient future balances technology with human values and social purpose.

Conclusion

The Future Learner envisions 2040 as a pivotal point for digital education, where the success or failure of AI integration depends on ethical design, equitable access, and sustained human oversight. Generative AI can create unprecedented opportunities for personalisation and engagement, but only if education systems preserve their human essence – empathy, creativity, and community – amid the accelerating digital transformation.

URL

https://ec.europa.eu/newsroom/eacea_oep/items/903368/en

Summary generated by ChatGPT 5


3 Things AI Can Do For You: The No-Nonsense Guide

by Dr Yannis
Estimated reading time: 6 minutes


Some people are for it, some people are against it, some people write guidelines about it. What unites most of these ‘factions’ is an almost total inability to use it effectively. Yes, I am talking about AI, the super useful, super intuitive thing that is changing your life by, well, generating some misspelled logos.

This blog post offers something different. Three real things you can do. I will tell you what they are, how to do them, why you should do them, and how much it will cost you. And I am not talking about the things everyone else is talking about.

I know about all this because, ever since ChatGPT launched in late 2022, I have been all over it. I have incorporated AI tools into all aspects of my work (even life) and I have built an entire business on the back of it. Here we go then.

Thing Number 1: Updating Your Resources

Academics spend a great deal of time updating things. We update reading lists, handbooks, lecture notes, guides and textbooks, all of the time. Sometimes we update material in areas where we have deep expertise; sometimes we update a handbook for a module that got dumped on us because someone left.

Here is how to leverage technology to make this process faster and less painful. You will need a subscription to ChatGPT (£20+VAT, prices as of September 2025) and access to Gemini (via a Google Pro subscription, currently free for a year for students and staff, or £18.99/month).

Upload the material you wish to update, for example a chapter from a textbook. Select the deep-thinking or deep-research option and ask the bot to review your uploaded text for needed updates and to recommend changes. Once this is ready, ask it to incorporate the changes into your text, or input them manually as needed. Then take the updated document and run it through the other model, asking it to check for accuracy.

The result is a document that has been updated, verified and often reworded for clarity. Both models offer references and explain their rationale, so you can verify manually and check every step of the process.
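
If you would rather script this than click through the chat interfaces, the same two-step workflow can be driven through the OpenAI and Google Gemini APIs. A minimal sketch follows, assuming you have API keys (billed separately from the chat subscriptions above); the model names are illustrative and change frequently.

    # Sketch: review a chapter with one model, cross-check with another.
    # Assumes OPENAI_API_KEY and GOOGLE_API_KEY are set in the environment;
    # model names are illustrative.
    import os
    from openai import OpenAI
    import google.generativeai as genai

    openai_client = OpenAI()  # reads OPENAI_API_KEY automatically
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

    chapter = open("chapter.txt", encoding="utf-8").read()

    # Step 1: ask the first model to review and rewrite the text.
    review = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a subject-matter editor."},
            {"role": "user", "content": (
                "Review this textbook chapter for out-of-date content, "
                "recommend specific changes, then rewrite the chapter with "
                "those changes incorporated:\n\n" + chapter
            )},
        ],
    )
    updated = review.choices[0].message.content

    # Step 2: run the updated draft through the other model as a check.
    checker = genai.GenerativeModel("gemini-1.5-pro")
    verdict = checker.generate_content(
        "Check this updated chapter for factual accuracy and flag any "
        "claims that need a citation:\n\n" + updated
    )
    print(verdict.text)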

Conclusion? You have updated your resource, and it took you one morning instead of a week.

Thing Number 2: Creating Podcasts

Yes, we can lecture, we can offer tutorials, indeed we can upload PDFs to the VLE. But why stop there? Students benefit from multi-modal forms of learning. The surest way to get the message across is to deliver the same information live, via text, audio and video, multiple times.

What has been stopping us doing this effectively? The answer is that most of us are terrible in front of a camera. Yes, students may appreciate a video podcast, but if you look like the hostage in a ransom request video, you are unlikely to hold their attention.

Here is what to do. Use one of the bots you already subscribe to – ChatGPT or Gemini (as above), or even Copilot (via an institutional licence or £19/month) – to turn your lecture notes, book chapter or even recordings of live sessions into a podcast script. You can select length, style and focus to match your intended audience.

You then go to ElevenLabs ($22+VAT/month) and make a clone of your voice. This sounds scary, but it isn’t. Just one tip: do not believe the ‘5 minutes of recording and you’ll fool your mom’ spiel. You need quality recordings of your voice that run for a couple of hours to get good, reliable results.

Once you have your voice clone, go to HeyGen ($29+VAT/month) and create your video clone. This can be done (at high spec) either by uploading a green-screen recording of yourself of about 20 minutes, or by using a new photo-to-video feature (I know this sounds unlikely, but it works very well).

You are now good to go, having cloned yourself in audio and video. You can bring the two together in HeyGen, feed it your pre-prepared scripts and bingo, you can produce unlimited videos where you narrate your lecture scripts looking like a Hollywood star, and no one needs to expect a ransom note.
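
For batch production, ElevenLabs also exposes a REST API, so once your voice clone exists you can script the narration step instead of using the dashboard. A minimal sketch, assuming an API key and a voice ID copied from your ElevenLabs account (the endpoint and field names follow the documented v1 API; the model name is illustrative):

    # Sketch: narrate a podcast script in your cloned voice via the
    # ElevenLabs v1 text-to-speech endpoint. VOICE_ID comes from your
    # ElevenLabs dashboard; API usage is billed separately.
    import os
    import requests

    VOICE_ID = "your-voice-id"
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

    script = open("podcast_script.txt", encoding="utf-8").read()

    response = requests.post(
        url,
        headers={
            "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
            "Content-Type": "application/json",
        },
        json={
            "text": script,
            "model_id": "eleven_multilingual_v2",
            "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
        },
    )
    response.raise_for_status()

    # The endpoint returns MP3 audio bytes.
    with open("episode.mp3", "wb") as f:
        f.write(response.content)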

Thing Number 3: Generating Student Feedback

Most of us are used to generating student feedback on the basis of proformas and templates that combine expectations on learning outcomes with marking grids and assessment criteria.

What students crave (and keep telling us in surveys such as the NSS in the UK) is personalised feedback. Hardly anyone, however, has the time to personalise templates to student scripts in a way that is deeply meaningful and helpful. We usually copy-paste the proforma and stick an extra line on the end inviting students to ‘come and see me if you have questions’.

The bots described above (ChatGPT, Gemini, Copilot etc.) do a fantastic job of adapting templates to individual student scripts. Yes, this requires uploading a great deal of university material, plus the student scripts, to the bot – more on this below.
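
By way of illustration, here is a minimal sketch of that template-adaptation step via the OpenAI API – the same prompt works pasted into any of the chatbots above. The file names, function and model are placeholders, and the data-protection caveats in the small print below apply in full.

    # Sketch: adapt a standard feedback proforma to one student's script.
    # File names and model are illustrative; check your institution's
    # data-protection rules before uploading student work.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    rubric = open("marking_rubric.txt", encoding="utf-8").read()
    template = open("feedback_proforma.txt", encoding="utf-8").read()

    def personalised_feedback(student_script: str) -> str:
        """Fill the proforma with comments specific to one script."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": (
                    "You are a university marker. Adapt the feedback "
                    "template to the student's actual work: quote their "
                    "text where it meets (or misses) each criterion, keep "
                    "the template's structure, and stay within the rubric."
                )},
                {"role": "user", "content": (
                    f"Rubric:\n{rubric}\n\nTemplate:\n{template}\n\n"
                    f"Student script:\n{student_script}"
                )},
            ],
        )
        return response.choices[0].message.content

    print(personalised_feedback(open("student_script.txt", encoding="utf-8").read()))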

And Now The Small-Print

First of all, we’ve racked up quite a bill here. You must be thinking: why should I pay for all this out of my own pocket? The answer is that you should not, but even if you did, it won’t be that bad. You don’t need to keep paying the subscriptions beyond the point you need them (all of these are pay-as-you-go), and most offer free trials or discounts at the beginning of your subscription. You may even get your university to pay for some of it (for example, Copilot).

Secondly, aren’t there data protection and privacy concerns? Yes, there are. All of the above assumes you either own the resources you are uploading or have permission to upload them. This is a tall order at most institutions, which take an unnecessarily guarded view of AI. Some concerns are real, though. For example, I don’t like the latest iteration of Gemini’s terms, which forces third-party reviewers onto your content if you want full functionality.

Thirdly, won’t a kid in Texas go without a shower every time you say thank you to ChatGPT? I choose not to care about that one. The Americans have bigger fish to fry at the moment.

There you have it, then: three concrete things you can do with AI as an academic, with costs explained. Bonus feature? You can use all these tools to launch your own school on social media, like I did!

Dr Ioannis Glinavos

Academic Entrepreneur

Dr Ioannis Glinavos is a legal academic and education innovator with a strong focus on the intersection of AI and EdTech. As a pioneer in integrating artificial intelligence tools into higher education, he has developed engaging, student-centred resources—particularly in law—through podcasts, video content, and interactive platforms. His work explores how emerging technologies can enhance learning outcomes and critical thinking, while keeping students engaged in synchronous and asynchronous content.


Software / Services Used

ChatGPT – https://chatgpt.com/
Google Gemini – https://gemini.google.com/app
ElevenLabs (AI Voice Generator) – https://elevenlabs.io/
HeyGen (AI Video Generator) – https://www.heygen.com/

Students Who Lack Academic Confidence More Likely to Use AI


Research suggests a correlation between a lack of academic confidence in students and an increased likelihood of turning to AI tools for assistance. The image depicts a student utilising an AI interface offering “confidence boost” and “essay assist”, illustrating how AI can become a crutch for those feeling insecure about their abilities in the academic environment. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

A survey by Inside Higher Ed and Generation Lab finds that 85% of students claim they’ve used generative AI for coursework in the past year. Among the habits observed, students with lower self-perceived academic competence or low confidence are more likely to lean on AI tools, especially when unsure or reluctant to ask peers or instructors for help. The study distinguishes between instrumental help-seeking (clarification, explanations) and executive help-seeking (using AI to complete work). Students who trust AI more are also more likely to use it. The authors argue that universities need clearer AI policies and stronger support structures so that students don’t feel forced into overreliance.

Key Points

  • 85% of surveyed students reported using generative AI for coursework in the past year.
  • Students with lower academic confidence or discomfort asking peers tend to rely more on AI.
  • AI use splits into two modes: instrumental (asking questions, clarifying) vs executive (using the AI to generate or complete work).
  • Trust in AI correlates with higher usage, even controlling for other variables.
  • Many students call for clear, standardised institutional policies on AI use to reduce ambiguity.

URL

https://www.insidehighered.com/news/student-success/academic-life/2025/09/30/students-who-lack-academic-confidence-more-likely-use

Summary generated by ChatGPT 5


2025 Horizon Action Plan: Building Skills and Literacy for Teaching with GenAI


Source

Jenay Robert, EDUCAUSE (2025)

Summary

This collection of essays explores how artificial intelligence—particularly generative AI (GenAI)—is reshaping the university sector across teaching, research, and administration. Contributors, including Dame Wendy Hall, Vinton Cerf, Rose Luckin, and others, argue that AI represents a profound structural shift rather than a passing technological wave. The report emphasises that universities must respond strategically, ethically, and holistically: developing AI literacy among staff and students, redesigning assessment, and embedding responsible innovation into governance and institutional strategy.

AI is portrayed as both a disruptive and creative force. It automates administrative processes, accelerates research, and transforms strategy-making, while simultaneously challenging ideas of authorship, assessment, and academic integrity. Luckin and others call for universities to foster uniquely human capacities—critical thinking, creativity, emotional intelligence, and metacognition—so that AI augments rather than replaces human intellect. Across the essays, there is strong consensus that AI literacy, ethical governance, and institutional agility are vital if universities are to remain credible and relevant in the AI era.

Key Points

  • GenAI is reshaping all aspects of higher education teaching and learning.
  • AI literacy must be built into curricula, staff training, and institutional culture.
  • Faculty should use GenAI to enhance creativity and connection, not replace teaching.
  • Clear, flexible policies are needed for responsible and ethical AI use.
  • Institutions must prioritise equity, inclusion, and closing digital divides.
  • Ongoing professional development in AI is essential for staff and administrators.
  • Collaboration across institutions and with industry accelerates responsible adoption.
  • Assessment and pedagogy must evolve to reflect AI’s role in learning.
  • GenAI governance should balance innovation with accountability and transparency.
  • Shared toolkits and global practice networks can scale learning and implementation.

Conclusion

The Action Plan positions GenAI as both a challenge and a catalyst for renewal in higher education. Institutions that foster literacy, ethics, and innovation will not only adapt but thrive. Teaching with AI is framed as a collective, values-led enterprise—one that keeps human connection, creativity, and critical thinking at the centre of the learning experience.

URL

https://library.educause.edu/resources/2025/9/2025-educause-horizon-action-plan-building-skills-and-literacy-for-teaching-with-genai

Summary generated by ChatGPT 5