Student Generative AI Survey 2026


Source

Higher Education Policy Institute (HEPI), Report 199, 2026

Summary

This HEPI report presents findings from a large-scale survey of UK higher education students on their use of generative artificial intelligence (GenAI), building on earlier surveys from 2024 and 2025. It shows that GenAI use is now widespread and normalised across the student population, with most students using AI tools regularly for tasks such as explaining concepts, summarising readings, generating ideas, and supporting writing. The report highlights a shift from experimental use to embedded study practice, with students increasingly viewing GenAI as a standard academic tool rather than an optional extra.

However, the findings also reveal a complex landscape of uneven skills, uncertainty, and institutional inconsistency. While many students report benefits in efficiency and understanding, concerns persist around overreliance, accuracy, and fairness. The report notes that guidance from institutions remains variable, with students often unclear about acceptable use in assessments. Importantly, the data suggests a growing expectation that universities should actively teach students how to use GenAI effectively and ethically, rather than simply regulate or restrict it. The report underscores the need for clearer policies, improved AI literacy, and assessment redesign that reflects real-world practices.

Key Points

  • The majority of students now use GenAI regularly in their studies.
  • Common uses include explaining concepts, summarising, and drafting work.
  • GenAI is becoming embedded as a standard academic tool.
  • Students report gains in efficiency, productivity, and understanding.
  • Concerns remain about accuracy, bias, and overreliance.
  • Institutional guidance on GenAI use is inconsistent or unclear.
  • Many students are uncertain about acceptable use in assessments.
  • There is strong demand for formal AI literacy education.
  • Assessment practices are not yet aligned with widespread AI use.
  • Equity issues arise from unequal access to tools and skills.

Conclusion

The HEPI Student Generative AI Survey 2026 highlights a decisive shift: generative AI is no longer emerging but embedded in student learning. The challenge for higher education is to move from reactive policy-making to proactive educational design, equipping students with the skills, clarity, and critical awareness needed to use AI responsibly and effectively in both academic and professional contexts.

Keywords

generative AI, higher education students, AI literacy, assessment, institutional guidance, equity

URL

https://www.hepi.ac.uk/wp-content/uploads/2026/03/HEPI-Report-199-Gen-AI-Survey-2026.pdf

Summary generated by ChatGPT 5.3


Ethical Guidelines on the Use of Artificial Intelligence (AI) and Data in Teaching and Learning for Educators


Source

European Commission: Directorate-General for Education, Youth, Sport and Culture, Guidelines on the ethical use of artificial intelligence and data in teaching and learning for educators, Publications Office of the European Union, 2026, https://data.europa.eu/doi/10.2766/7967834

Summary

These European Commission guidelines provide practical and ethical direction for educators using artificial intelligence (AI) and data-driven technologies in teaching and learning. Aimed primarily at school education but broadly applicable across educational contexts, the document emphasises that AI should enhance human-centred, inclusive, and equitable education. It introduces a structured framework to help educators critically assess AI tools, ensuring their use aligns with pedagogical goals, respects learners’ rights, and supports professional autonomy.

The guidelines are grounded in key ethical principles, including human agency, transparency, fairness, privacy, and accountability. They highlight the importance of developing AI literacy among educators and learners, enabling them to understand how AI systems function, what data they use, and what limitations they carry. A strong emphasis is placed on critical engagement: educators are encouraged to question AI outputs, address bias, and avoid overreliance on automated systems. The document also provides a practical self-reflection tool to support educators in evaluating AI tools across dimensions such as reliability, safety, inclusiveness, and educational value.

Key Points

  • AI should support human-centred, inclusive teaching and learning.
  • Educators retain responsibility for decisions made using AI tools.
  • Transparency and explainability are essential for trust in AI systems.
  • AI literacy is critical for both teachers and learners.
  • Data protection and privacy must comply with GDPR principles.
  • Bias and fairness must be actively monitored and mitigated.
  • Educators should critically evaluate AI outputs and limitations.
  • AI tools should align with pedagogical goals, not drive them.
  • A self-reflection framework supports responsible AI adoption.
  • Ethical use of AI requires ongoing professional development and awareness.

Conclusion

The guidelines position AI as a valuable but carefully bounded tool in education. By embedding ethical reflection, critical engagement, and human oversight into everyday practice, educators can harness AI’s benefits while protecting learner rights, educational integrity, and professional judgement.

Keywords

artificial intelligence, ethics, educators, AI literacy, data protection, GDPR, bias, human agency

URL

https://op.europa.eu/en/publication-detail/-/publication/f692aa0b-17a7-11f1-8870-01aa75ed71a1

Summary generated by ChatGPT 5.3