AI and the Future of Universities


Source

Higher Education Policy Institute (HEPI), October 2025

Summary

This collection of essays explores how artificial intelligence—particularly generative AI (GenAI)—is reshaping the university sector across teaching, research, and administration. Contributors, including Dame Wendy Hall, Vinton Cerf, Rose Luckin, and others, argue that AI represents a profound structural shift rather than a passing technological wave. The report emphasises that universities must respond strategically, ethically, and holistically: developing AI literacy among staff and students, redesigning assessment, and embedding responsible innovation into governance and institutional strategy.

AI is portrayed as both a disruptive and creative force. It automates administrative processes, accelerates research, and transforms strategy-making, while simultaneously challenging ideas of authorship, assessment, and academic integrity. Luckin and others call for universities to foster uniquely human capacities—critical thinking, creativity, emotional intelligence, and metacognition—so that AI augments rather than replaces human intellect. Across the essays, there is strong consensus that AI literacy, ethical governance, and institutional agility are vital if universities are to remain credible and relevant in the AI era.

Key Points

  • AI literacy is now essential for all staff and students.
  • GenAI challenges traditional assessment and integrity systems.
  • Universities must act quickly but ethically in AI integration.
  • Professional services can achieve major efficiency gains through AI.
  • AI enables real-time strategy analysis and forecasting.
  • AI literacy must extend to leadership and governance structures.
  • Human intelligence—creativity, criticality, empathy—remains central.
  • Ethical frameworks and transparency are essential for trust.
  • Data maturity and infrastructure underpin successful adoption.
  • Collaboration across disciplines and sectors will shape sustainable change.

Conclusion

The report concludes that AI will redefine the university’s purpose, requiring institutions to shift from reactive adaptation to active leadership in shaping the AI future. The challenge is not simply to use AI but to ensure it strengthens human intelligence, academic integrity, and social purpose in higher education.

URL

https://www.hepi.ac.uk/reports/right-here-right-now-new-report-on-how-ai-is-transforming-higher-education/

Summary generated by ChatGPT 5


Preparing Students for the World of Work Means Embracing an AI-Positive Culture


To truly prepare students for tomorrow’s workforce, higher education must foster an AI-positive culture. This involves embracing artificial intelligence not as a threat, but as a transformative tool that enhances skills and creates new opportunities in the evolving world of work. Image (and typos) generated by Nano Banana.

Source

Wonkhe

Summary

Alastair Robertson argues that higher education must move beyond piecemeal experimentation with generative AI and instead embed an “AI-positive culture” across teaching, learning, and institutional practice. While universities have made progress through policies such as the Russell Group’s principles on generative AI, most remain in an exploratory phase lacking strategic coherence. Robertson highlights the growing industry demand for AI literacy—especially foundational skills like prompting and evaluating outputs—contrasting this with limited student support in universities. He advocates co-creation among students, educators, and AI, where generative tools enhance learning personalisation, assessment, and data-driven insights. To succeed, universities must invest in technology, staff development, and policy frameworks that align AI with institutional values and foster innovation through strategic leadership and partnership with industry.

Key Points

  • Industry demand for AI literacy far outpaces current higher education provision.
  • Universities remain at an early stage of AI adoption, lacking coherent strategic approaches.
  • Co-creation between students, educators, and AI can deepen engagement and improve outcomes.
  • Embedding AI requires investment in infrastructure, training, and ethical policy alignment.
  • An AI-positive culture depends on leadership, collaboration, and flexibility to adapt as technology evolves.

URL

https://wonkhe.com/blogs/preparing-students-for-the-world-of-work-means-embracing-an-ai-positive-culture/

Summary generated by ChatGPT 5


New elephants in the Generative AI room? Acknowledging the costs of GenAI to develop ‘critical AI literacy’

by Sue Beckingham, NTF PFHEA – Sheffield Hallam University, and Peter Hartley, NTF – Edge Hill University
Image created using DALL-E 2 (2024) – reused to save cost

The GenAI industry regularly proclaims that the ‘next release’ of the chatbot of your choice will get closer to its ultimate goal – Artificial General Intelligence (AGI) – where AI can complete the widest range of tasks better than the best humans.

Are we providing sufficient help and support to our colleagues and students to understand and confront the implications of this direction of travel?

Or is AGI either an improbable dream or the ultimate threat to humanity?

Along with many (most?) GenAI users, we have seen impressive developments but not yet seen apps demonstrating anything close to AGI. OpenAI released GPT-5 in 2025 and Sam Altman (CEO) enthused: “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” But critical reaction to this new model was very mixed and he had to backtrack, admitting that the launch was “totally screwed up”. Hopefully, this provides a bit of breathing space for Higher Education – an opportunity to review how we encourage staff and students to adopt an appropriately critical and analytic perspective on GenAI – what we would call ‘critical AI literacy’.

Acknowledging the costs of Generative AI

Critical AI literacy involves understanding how to use GenAI responsibly and ethically – knowing when and when not to use it, and the reasons why. One elephant in the room is that GenAI incurs costs, and we need to acknowledge these.

Staff and students should be aware of ongoing debates on GenAI’s environmental impact, especially given increasing pressures to develop GenAI as your ‘always-on/24-7’ personal assistant. Incentives to treat GenAI as a ‘free’ service have increased with OpenAI’s move into education, offering free courses and certification. We also see increasing pressure to integrate GenAI into pre-university education, as illustrated by the recent ‘Back to School’ AI Summit 2025 and accompanying book, which promises a future of ‘creativity unleashed’.

We advocate a multi-factor definition of the ‘costs’ of GenAI so we can debate its capabilities and limitations from the broadest possible perspective. For example, we must evaluate opportunity costs to users. Recent research, including brain scans of individual users, found that over-use of GenAI (or specific patterns of use) can have a definite negative impact on users’ cognitive capacities and performance, including metacognitive laziness and cognitive debt. We group costs into four key areas: cost to the individual, to the environment, to knowledge and to future jobs.

Cost of Generative AI to the individual, environment, knowledge and future jobs
(Beckingham and Hartley, 2025)

Cost to the individual

Fees – Subscription fees for GenAI tools range from free basic versions through to several tiers of paid upgrades (note: subscription tiers are continually changing). Premium offerings such as enterprise AI assistants are costly, limiting access to businesses or high-income users.

Accountability – Universities must provide clear guidelines on what can and cannot be shared with these tools, along with the risks and implications of infringing copyright.

Over-reliance – Learning outcomes depend on how GenAI apps are used. If students rely too heavily or exclusively on AI-generated content, they can make poor decisions, with a detrimental effect on their skill development.

Safety and mental health – Increased use of personal assistants providing ‘personal advice’ for socioemotional purposes can lead to increased social isolation.

Cost to the environment

Energy consumption – Training and deploying Large Language Models (LLMs) requires millions of GPU hours, and the demand increases substantially for image generation. The growth of data centres also creates concerns for energy supply.

Emissions and carbon footprint – Developing the technology creates emissions through the mining, manufacturing, transport and recycling processes.

Water consumption – Water needed for cooling data centres equates to millions of gallons per day.

e-Waste – This includes toxic materials (e.g. lead, barium, arsenic and chromium) in components of the ever-growing number of LLM servers. Obsolete servers generate substantial toxic emissions if not recycled properly.

Cost to knowledge

Erosion of expertise – Models are trained on information publicly available on the internet, on data from formal partnerships with third parties, and on information that users, human trainers and researchers provide or generate.

Ethics – Ethical concerns highlight the lived experiences of those employed to annotate data and moderate text, images and video in order to remove toxic content.

Misinformation – Indiscriminate data scraping from blogs, social media, and news sites, coupled with text entered by users of LLMs, can result in ‘regurgitation’ of personal data, hallucinations and deepfakes.

Bias – Algorithmic bias and discrimination occur when LLMs inherit social patterns, perpetuating stereotypes relating to gender, race, disability and other protected characteristics.

Cost to future jobs

Job displacement – GenAI is “reshaping industries and tasks across all sectors”, driving business transformation. But will these technologies replace rather than augment human work?

Job matching – Increased use of AI in recruitment and by jobseekers creates the risk that GenAI misrepresents skills. This makes it harder for profile-screening tools to identify candidates who can genuinely evidence the skills they claim.

New skills – Reskilling and upskilling in AI and big data top the list of fastest-growing workplace skills. A lack of opportunity to reskill can lead to increased unemployment and inequality.

Wage suppression – Workers with skills that enable them to use AI may see their productivity and wages increase, whereas those who do not may see their wages decrease.

The way forward

We can only develop AI literacy by actively involving our student users. Previously we have argued that institutions/faculties should establish ‘collaborative sandpits’ offering opportunities for discussion and ‘co-creation’. Staff and students need space for this so that they can contribute to debates on what we really mean by ‘responsible use of GenAI’ and develop procedures to ensure responsible use. This is one area where collaborations/networks like GenAI N3 can make a significant contribution.

Sadly, we see too many commentaries which downplay, neglect or ignore GenAI’s issues and limitations. For example, the latest release from OpenAI – Sora 2 – offers text-to-video generation and has raised important challenges to copyright regulation. There is also the continuing problem of hallucinations: despite recent claims of improved accuracy, GenAI is still susceptible to them. But how do we identify and guard against untruths which are confidently expressed by the chatbot?

We all need to develop a realistic perspective on GenAI’s likely development. The pace of technical change (and some rather secretive corporate habits) makes this very challenging for individuals, so we need proactive and co-ordinated approaches by course/programme teams. The practical implication of this discussion is that we all need a much broader understanding of GenAI than a simple ‘press this button’ approach.

Reference

Beckingham, S. and Hartley, P. (2025). In search of ‘Responsible’ Generative AI (GenAI). In: Doolan, M.A. and Ritchie, L. (eds.) Transforming Teaching Excellence: Future Proofing Education for All. Leading Global Excellence in Pedagogy, Volume 3. UK: IFNTF Publishing. ISBN 978-1-7393772-2-9 (ebook). https://amzn.eu/d/gs6OV8X

Sue Beckingham

Associate Professor Learning and Teaching
Sheffield Hallam University

Sue Beckingham is an Associate Professor in Learning and Teaching at Sheffield Hallam University. Externally she is a Visiting Professor at Arden University and a Visiting Fellow at Edge Hill University. She is also a National Teaching Fellow, Principal Fellow of the Higher Education Academy and Senior Fellow of the Staff and Educational Developers Association. Her research interests include the use of technology to enhance active learning, and she has published and presented this work internationally as an invited keynote speaker. Recent book publications include Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment.

Peter Hartley

Visiting Professor
Edge Hill University

Peter Hartley is now a higher education consultant and Visiting Professor at Edge Hill University, following previous roles as Professor of Education Development at the University of Bradford and Professor of Communication at Sheffield Hallam University. A National Teaching Fellow since 2000, he has promoted new technology in education and now focuses on the applications and implications of Generative AI, co-editing and contributing to the SEDA/Routledge publication Using Generative AI Effectively in Higher Education (2024; paperback edition 2025). He has also produced several guides and textbooks for students (e.g. co-author of Success in Groupwork, 2nd edn). Ongoing work includes programme assessment strategies, concept mapping and visual thinking.



Professors Share Their Findings and Thoughts on the Use of AI in Research


Dive into the evolving landscape of academic research as leading professors share their insights and discoveries on integrating AI tools. Explore the benefits, challenges, and future implications of artificial intelligence in scholarly pursuits. Image (and typos) generated by Nano Banana.

Source

The Cavalier Daily

Summary

At the University of Virginia, faculty across disciplines are exploring how artificial intelligence can accelerate and reshape academic research. Associate Professor Hudson Golino compares AI’s transformative potential to the introduction of electricity in universities, noting its growing use in data analysis and conceptual exploration. Economist Anton Korinek, recently named among Time’s 100 most influential people in AI, evaluates where AI adds value—from text synthesis and coding to ideation—while cautioning that tasks like mathematical modelling still require human oversight. Professors Mona Sloane and Renee Cummings stress ethical transparency, inclusivity, and the need for disclosure when using AI in research, arguing that equity and critical reflection must remain at the heart of innovation.

Key Points

  • AI is increasingly used at the University of Virginia for research and analysis across disciplines.
  • Golino highlights AI’s role in improving efficiency but calls for deeper institutional understanding.
  • Korinek finds AI most effective for writing, coding, and text synthesis, less so for abstract modelling.
  • Sloane and Cummings advocate transparency, ethical use, and inclusion in AI-assisted research.
  • Faculty urge a balance between efficiency, equity, and accountability in AI’s integration into academia.

URL

https://www.cavalierdaily.com/article/2025/10/professors-share-their-findings-and-thoughts-on-the-use-of-ai-in-research

Summary generated by ChatGPT 5


Smarter Classrooms, Real Results: How AI is Rewriting the Rules of Education


Artificial intelligence is fundamentally “rewriting the rules of education,” ushering in an era of smarter classrooms and demonstrating tangible improvements in learning outcomes. This image envisions a dynamic, technologically advanced educational environment where AI tools enhance every aspect of teaching and learning, from personalised instruction and automated feedback to collaborative projects, ultimately delivering real and measurable results for students. Image (and typos) generated by Nano Banana.

Source

WTOP News

Summary

Will Vitka reports that artificial intelligence is transforming classrooms by saving teachers time, improving accessibility, and offering real-time personalised learning. University of Maryland professor Charles Harry describes AI as a “huge net positive” when used thoughtfully, helping educators create complex, adaptive assignments and enabling students to learn coding and data analysis more quickly. AI tools are also levelling the playing field for learners with disabilities and multilingual needs. However, privacy, ethical use, and over-reliance remain major concerns. Surveys show one in four teachers believe AI causes more harm than good, underscoring the need for balance between innovation and integrity.

Key Points

  • AI personalises learning and provides real-time academic feedback for students.
  • Educators using AI save up to six hours per week on administrative tasks.
  • Accessibility improves through tools like translation and voice-to-text.
  • Ethical concerns persist around cheating and student data privacy.
  • The global AI-in-education market could reach $90 billion by 2032.

URL

https://wtop.com/education/2025/10/smarter-classrooms-real-results-how-ai-is-rewriting-the-rules-of-education/

Summary generated by ChatGPT 5