Student Generative AI Survey 2026


Source

Higher Education Policy Institute (HEPI), Report 199, 2026

Summary

This HEPI report presents findings from a large-scale survey of UK higher education students on their use of generative artificial intelligence (GenAI), building on earlier surveys from 2024 and 2025. It shows that GenAI use is now widespread and normalised across the student population, with most students using AI tools regularly for tasks such as explaining concepts, summarising readings, generating ideas, and supporting writing. The report highlights a shift from experimental use to embedded study practice, with students increasingly viewing GenAI as a standard academic tool rather than an optional extra.

However, the findings also reveal a complex landscape of uneven skills, uncertainty, and institutional inconsistency. While many students report benefits in efficiency and understanding, concerns persist around overreliance, accuracy, and fairness. The report notes that guidance from institutions remains variable, with students often unclear about acceptable use in assessments. Importantly, the data suggests a growing expectation that universities should actively teach students how to use GenAI effectively and ethically, rather than simply regulate or restrict it. The report underscores the need for clearer policies, improved AI literacy, and assessment redesign that reflects real-world practices.

Key Points

  • The majority of students now use GenAI regularly in their studies.
  • Common uses include explaining concepts, summarising, and drafting work.
  • GenAI is becoming embedded as a standard academic tool.
  • Students report gains in efficiency, productivity, and understanding.
  • Concerns remain about accuracy, bias, and overreliance.
  • Institutional guidance on GenAI use is inconsistent or unclear.
  • Many students are uncertain about acceptable use in assessments.
  • There is strong demand for formal AI literacy education.
  • Assessment practices are not yet aligned with widespread AI use.
  • Equity issues arise from unequal access to tools and skills.

Conclusion

The HEPI Student Generative AI Survey 2026 highlights a decisive shift: generative AI is no longer emerging but embedded in student learning. The challenge for higher education is to move from reactive policy-making to proactive educational design—equipping students with the skills, clarity, and critical awareness needed to use AI responsibly and effectively in both academic and professional contexts.

Keywords

URL

https://www.hepi.ac.uk/wp-content/uploads/2026/03/HEPI-Report-199-Gen-AI-Survey-2026.pdf

Summary generated by ChatGPT 5.3


Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators


Source

European Commission: Directorate-General for Education, Youth, Sport and Culture, Guidelines on the ethical use of artificial intelligence and data in teaching and learning for educators, Publications Office of the European Union, 2026, https://data.europa.eu/doi/10.2766/7967834

Summary

These European Commission guidelines provide practical and ethical direction for educators using artificial intelligence (AI) and data-driven technologies in teaching and learning. Aimed primarily at school education but broadly applicable across educational contexts, the document emphasises that AI should enhance human-centred, inclusive, and equitable education. It introduces a structured framework to help educators critically assess AI tools, ensuring their use aligns with pedagogical goals, respects learners’ rights, and supports professional autonomy.

The guidelines are grounded in key ethical principles, including human agency, transparency, fairness, privacy, and accountability. They highlight the importance of developing AI literacy among educators and learners, enabling them to understand how AI systems function, what data they use, and what limitations they carry. A strong emphasis is placed on critical engagement—educators are encouraged to question AI outputs, address bias, and avoid overreliance on automated systems. The document also provides a practical self-reflection tool to support educators in evaluating AI tools across dimensions such as reliability, safety, inclusiveness, and educational value.

Key Points

  • AI should support human-centred, inclusive teaching and learning.
  • Educators retain responsibility for decisions made using AI tools.
  • Transparency and explainability are essential for trust in AI systems.
  • AI literacy is critical for both teachers and learners.
  • Data protection and privacy must comply with GDPR principles.
  • Bias and fairness must be actively monitored and mitigated.
  • Educators should critically evaluate AI outputs and limitations.
  • AI tools should align with pedagogical goals, not drive them.
  • A self-reflection framework supports responsible AI adoption.
  • Ethical use of AI requires ongoing professional development and awareness.

Conclusion

The guidelines position AI as a valuable but carefully bounded tool in education. By embedding ethical reflection, critical engagement, and human oversight into everyday practice, educators can harness AI’s benefits while protecting learner rights, educational integrity, and professional judgement.

Keywords

URL

https://op.europa.eu/en/publication-detail/-/publication/f692aa0b-17a7-11f1-8870-01aa75ed71a1

Summary generated by ChatGPT 5.3


OECD Digital Education Outlook 2026


Source

OECD (2026), OECD Digital Education Outlook 2026: Exploring Effective Uses of Generative AI in Education, OECD Publishing, Paris, https://doi.org/10.1787/062a7394-en.

Summary

This flagship OECD report examines how generative artificial intelligence (GenAI) is reshaping education systems, with a strong emphasis on evidence-based uses that enhance learning, teaching, assessment, and system capacity. Drawing on international research, policy analysis, and design experiments, the report moves beyond hype to identify where GenAI adds genuine educational value and where it introduces risks. It highlights GenAI’s potential to support personalised learning, high-quality feedback, teacher productivity, and system-level efficiency, while cautioning against uses that displace cognitive effort or undermine deep learning.

A central theme is the need for hybrid human–AI approaches that preserve teacher autonomy, learner agency, and professional judgement. The report shows that GenAI can be effective when embedded in pedagogically grounded designs, such as intelligent tutoring, formative feedback, and collaborative learning, but harmful when used as a shortcut to answers. It also reviews national policy responses, noting a global shift towards targeted guidance, AI literacy frameworks, and proportionate regulation aligned with ethical principles, transparency, and accountability. The report calls for coordinated strategies that integrate curriculum reform, assessment redesign, professional development, and governance to ensure GenAI strengthens, rather than substitutes, human learning and expertise.

Key Points

  • GenAI can enhance personalised learning and feedback at scale when pedagogically designed.
  • Overreliance on GenAI risks reducing cognitive engagement and deep learning.
  • Hybrid human–AI models are essential to preserve teacher and learner agency.
  • Generative AI should support formative assessment rather than replace judgement.
  • AI literacy is a foundational skill for students, teachers, and leaders.
  • Teacher autonomy and professional expertise must be protected in AI integration.
  • Evidence-informed design is critical to avoid unintended learning harms.
  • National policies increasingly favour guidance over blanket bans.
  • Ethical principles, transparency, and accountability underpin responsible use.
  • Cross-system collaboration strengthens sustainable AI adoption.

Conclusion

The OECD Digital Education Outlook 2026 positions generative AI as a powerful but conditional force in education. Its impact depends not on the technology itself, but on how thoughtfully it is designed, governed, and integrated into learning ecosystems. By prioritising human-centred, evidence-based, and ethically grounded approaches, education systems can harness GenAI to improve quality and equity while safeguarding the core purposes of education.

Keywords

URL

https://www.oecd.org/en/publications/oecd-digital-education-outlook-2026_062a7394-en.html

Summary generated by ChatGPT 5.2


HEA – Generative AI in Higher Education Teaching & Learning: Policy Framework


Source

O’Sullivan, James, Colin Lowry, Ross Woods & Tim Conlon. Generative AI in Higher Education Teaching & Learning: Policy Framework. Higher Education Authority, 2025. DOI: 10.82110/073e-hg66.

Summary

This policy framework provides a national, values-based approach to guiding the adoption of generative artificial intelligence (GenAI) in teaching and learning across Irish higher education institutions. Rather than prescribing uniform rules, it establishes a shared set of principles to support informed, ethical, and pedagogically sound decision-making. The framework recognises GenAI as a structural change to higher education—particularly to learning design, assessment, and academic integrity—requiring coordinated institutional and sector-level responses rather than ad hoc or individual initiatives.

Focused explicitly on teaching and learning, the framework foregrounds five core principles: academic integrity and transparency; equity and inclusion; critical engagement, human oversight, and AI literacy; privacy and data governance; and sustainable pedagogy. It emphasises that GenAI should neither be uncritically embraced nor categorically prohibited. Instead, institutions are encouraged to adopt proportionate, evidence-informed approaches that preserve human judgement, ensure fairness, protect student data, and align AI use with the public mission of higher education. The document also outlines how these principles can be operationalised through governance, assessment redesign, staff development, and continuous sector learning.

Key Points

  • The framework offers a shared national reference point rather than prescriptive rules.
  • GenAI is treated as a systemic pedagogical challenge, not a temporary disruption.
  • Academic integrity depends on transparency, accountability, and visible authorship.
  • Equity and inclusion must be designed into AI adoption from the outset.
  • Human oversight and critical engagement remain central to learning and assessment.
  • AI literacy is positioned as a core capability for staff and students.
  • Privacy, data protection, and institutional data sovereignty are essential.
  • Assessment practices must evolve beyond reliance on traditional written outputs.
  • Sustainability includes both environmental impact and long-term educational quality.
  • Ongoing monitoring and sector-wide learning are critical to responsible adoption.

Conclusion

The HEA Policy Framework positions generative AI as neither a threat to be resisted nor a solution to be uncritically adopted. By grounding AI integration in shared academic values, ethical governance, and pedagogical purpose, it provides Irish higher education with a coherent foundation for navigating AI-enabled change while safeguarding trust, equity, and educational integrity.

Keywords

URL

https://hea.ie/2025/12/22/hea-publishes-national-policy-framework-on-generative-ai-in-teaching-and-learning/

Summary generated by ChatGPT 5.2


Building the Manifesto: How We Got Here and What Comes Next

By Ken McCarthy
Estimated reading time: 6 minutes
Looking ahead: As we navigate the complexities of generative AI in higher education, it is crucial to remember that technology does not dictate our path. Through ethical inquiry and reimagined learning, the horizon is still ours to shape. Image (and typos) generated by Nano Banana.

When Hazel and I started working with GenAI in higher education, we did not set out to write a manifesto. We were simply trying to make sense of a fast-moving landscape. GenAI arrived quickly, finding its way into classrooms and prompting new questions about academic integrity and AI integration long before we had time to work through what it all meant. Students were experimenting earlier than many staff felt prepared for. Policies were still forming.

What eventually became the Manifesto for Generative AI in Higher Education began as our attempt to capture our thoughts. Not a policy, not a fully fledged framework, not a strategy. Just a way to hold the questions, principles, and tensions that kept surfacing. It took shape through notes gathered in margins, comments shared after workshops, ideas exchanged in meetings, and moments in teaching sessions that stayed with us long after they ended. It was never a single project. It gathered itself slowly.

From the start, we wanted it to be a short read that opened the door to big ideas. The sector already has plenty of documents that run to seventy or eighty pages. Many of them are helpful, but they can be difficult to take into a team meeting or a coffee break. We wanted something different. Something that could be read in ten minutes, but still spark thought and conversation. A series of concise statements that felt recognisable to anyone grappling with the challenges and possibilities of GenAI. A document that holds principles without pretending to offer every answer. We took inspiration from the Edinburgh Manifesto for Teaching Online, which reminded us that a series of short, honest statements can travel further than a long policy ever will.

The manifesto is a living reflection. It recognises that we stand at a threshold between what learning has been and what it might become. GenAI brings possibility and uncertainty together, and our role is to respond with imagination and integrity to keep learning a deeply human act.

Three themes shaped the work

As the ideas settled, three themes emerged that helped give structure to the thirty statements.

Rethinking teaching and learning responds to an age of abundance. Information is everywhere. The task of teaching shifts toward helping students interpret, critique, and question rather than collect. Inquiry becomes central. Several statements address this shift, emphasising that GenAI does not replace thinking. It reveals the cost of not thinking. They point toward assessment design that rewards insight over detection and remind us that curiosity drives learning in ways that completion never can.

Responsibility, ethics, and power acknowledges that GenAI is shaped by datasets, values, and omissions. It is not neutral. This theme stresses transparency, ethical leadership, and the continuing importance of academic judgement. It challenges institutions to act with care, not just efficiency. It highlights that prompting is an academic skill, not a technical trick, and that GenAI looks different in every discipline, which means no single approach will fit all contexts.

Imagination, humanity, and the future encourages us to look beyond the disruption of the present moment and ask what we want higher education to become. It holds inclusion as a requirement rather than an aspiration. It names sustainability as a learning outcome. It insists that ethics belong at the beginning of design processes. It ends with the reminder that the horizon is still ours to shape and that the future classroom is a conversation where people and systems learn in dialogue without losing sight of human purpose.

How it came together

The writing process was iterative. Some statements arrived whole. Others needed several attempts. We removed the ones that tried to do too much and kept the ones that stayed clear in the mind after a few days. We read them aloud to test the rhythm. The text only settled into its final shape once we noticed the three themes forming naturally.

The feedback from our reviewers, Tom Farrelly and Sue Beckingham, strengthened the final version. Their comments helped us tighten the language and balance the tone. The manifesto may have two named authors, but it is built from many voices.

Early responses from the sector

In the short time since the manifesto was released, the webpage has been visited by more than 750 people from 40 countries. For a document that began as a few lines in a notebook, this has been encouraging. It suggests the concerns and questions we tried to capture are widely shared. More importantly, it signals that there is an appetite for a conversation that is thoughtful, practical, and honest about the pace of change.

This early engagement reinforces something we felt from the start. The manifesto is only the beginning. It is not a destination. It is a point of departure for a shared journey.

Next steps: a book of voices across the sector

To continue that journey, we are developing a book of short essays and chapters that respond to the manifesto. Each contribution will explore a statement within the document. The chapters will be around 1,000 words. They can draw on practice, research, disciplinary experience, student partnership, leadership, policy, or critique. They can support, question, or challenge the manifesto. The aim is not agreement. The aim is insight.

We want to bring together educators, librarians, technologists, academic developers, researchers, students, and professional staff. The only requirement is that contributors have something to say about how GenAI is affecting their work, their discipline, or their students.

An invitation to join us

If you would like to contribute, we would welcome your expression of interest. You do not need specialist expertise in AI. You only need a perspective that might help the sector move forward with clarity and confidence.

Your chapter should reflect on a single statement. It could highlight emerging practice or ask questions that do not yet have answers. It could bring a disciplinary lens or a broader institutional one.

The manifesto was built from shared conversations. The next stage will be shaped by an even wider community. If this work is going to stay alive, it needs many hands.

The horizon is still ours to shape. If you would like to help shape it with us, please submit an expression of interest through the following link: https://forms.gle/fGTR9tkZrK1EeoLH8

Ken McCarthy

Head of Centre for Academic Practice
South East Technological University

As Head of the Centre for Academic Practice at SETU, I lead strategic initiatives to enhance teaching, learning, and assessment across the university. I work collaboratively with academic staff, professional teams, and students to promote inclusive, research-informed, and digitally enriched education.
I’m passionate about fostering academic excellence through professional development, curriculum design, and scholarship of teaching and learning. I also support and drive innovation in digital pedagogy and learning spaces.

Keywords