Latest Posts

Generative AI in Higher Education Teaching and Learning: Sectoral Perspectives


Source

Higher Education Authority

Summary

This report, commissioned by the Higher Education Authority (HEA), captures sector-wide perspectives on the impact of generative AI across Irish higher education. Through ten thematic focus groups and a leadership summit, it gathered insights from academic staff, students, support personnel, and leaders. The findings show that AI is already reshaping teaching, learning, assessment, and governance, but institutional responses remain fragmented and uneven. Participants emphasised the urgent need for national coordination, values-led policies, and structured capacity-building for both staff and students.

Key cross-cutting concerns included threats to academic integrity, the fragility of current assessment practices, risks of skill erosion, and unequal access. At the same time, stakeholders recognised opportunities for AI to enhance teaching, personalise learning, support inclusion, and free staff time for higher-value educational work. A consistent theme was that AI should not be treated merely as a technical disruption but as a pedagogical and ethical challenge that requires re-examining educational purpose.

Key Points

  • Sectoral responses to AI are fragmented; coordinated national guidance is urgently needed.
  • Generative AI challenges core values of authorship, originality, and academic integrity.
  • Assessment redesign is necessary—moving towards authentic, process-focused approaches.
  • Risks include skill erosion in writing, reasoning, and information literacy if AI is overused.
  • AI literacy for staff and students must go beyond tool use to include ethics and critical thinking.
  • Ethical use of AI requires shared principles, not just compliance or detection measures.
  • Inclusion is not automatic: without deliberate design, AI risks deepening inequality.
  • Staff feel underprepared and need professional development and institutional support.
  • Infrastructure challenges extend beyond tools to governance, procurement, and policy.
  • Leadership must shape educational vision, not just manage risk or compliance.

Conclusion

Generative AI is already embedded in higher education, raising urgent questions of purpose, integrity, and equity. The consultation shows both enthusiasm and unease, but above all a readiness to engage. The report concludes that a coordinated, values-led, and inclusive approach—balancing innovation with responsibility—will be essential to ensure AI strengthens, rather than undermines, Ireland’s higher education mission.

Keywords

URL

https://hea.ie/2025/09/17/generative-ai-in-higher-education-teaching-and-learning-sectoral-perspectives/

Summary generated by ChatGPT 5


How are UK students really using AI?


From research assistance to code generation and personalised study aids, AI is subtly and overtly reshaping how UK students approach their academic work. This mosaic illustrates the multifaceted ways technology is being integrated into learning, posing questions about its true impact and future trajectory in higher education. Image generated by Nano Banana.

Source

Times Higher Education

Summary

About two-thirds (66%) of UK undergraduate students report using AI for work or study, with one in three doing so at least weekly. ChatGPT is by far the most popular tool. Universities vary in how they regulate or guide AI use: some discourage it, others set boundaries but offer little training, and few actively teach ethical AI practice. Most students use AI to understand concepts, summarise content, or identify sources, while a smaller but significant share admits to using AI to create or partially create graded work. Many believe AI has helped their grades, though others see little or negative impact. Clearer guidance and teaching around ethical, effective AI use are needed.

Key Points

  • 66% of UK undergrads use AI for degree-related work; 33% use it weekly.
  • The most common applications: explaining difficult concepts (81%), summarising sources (69%), finding sources, and improving existing work.
  • About 14% of AI-using students admit to using AI in graded work in ways that could constitute cheating (creating or heavily editing it), roughly 9% of all students.
  • 47% of AI-using students report frequently encountering “hallucinations” (incorrect or false information) from AI tools.
  • Universities’ policies are mixed: some actively discourage use; many simply warn; only a minority proactively teach students how to use AI well and ethically.

Keywords

URL

https://yougov.co.uk/society/articles/52855-how-are-uk-students-really-using-ai

Summary generated by ChatGPT 5


Professors experiment as AI becomes part of student life


As AI increasingly integrates into daily student life, professors are actively experimenting with new pedagogical approaches and tools to harness its potential. This image captures a dynamic classroom setting where educators are at the forefront of exploring how AI can enrich learning, adapt teaching methods, and prepare students for an AI-driven future. Image generated by Nano Banana.

Source

The Globe and Mail

Summary

AI has shifted from novelty to necessity in Canadian higher education, with almost 60% of students now using it. Professors are experimenting with different approaches: some resist, others regulate, and many actively integrate AI into assessments. Concerns remain about diminished critical thinking, but educators like those at the University of Toronto and University of Guelph argue that ignoring AI leaves graduates unprepared. Strategies include teaching students to refine AI-generated drafts, redesigning assignments to require human input, and adopting oral assessments. The consensus is that policies alone cannot keep pace; practical, ethical, and reflective engagement is essential for preparing students to use AI responsibly.

Key Points

  • Nearly 60% of Canadian students use AI for coursework; globally the figure rises to over 90%.
  • Professors face a choice: resist, regulate, or embrace AI; ignoring it is seen as untenable.
  • Innovative teaching methods include refining AI drafts, training prompt skills, and oral assessments.
  • Concerns persist about weakening critical thinking and creativity.
  • Preparing students for AI-rich workplaces requires embedding literacy, ethics, and adaptability.

Keywords

URL

https://www.theglobeandmail.com/business/article-professors-experiment-as-ai-becomes-part-of-student-life/

Summary generated by ChatGPT 5


Are students really that keen on generative AI?


As generative AI tools become more prevalent, the student response is far from monolithic. This image captures the varied reactions—from eager adoption to thoughtful skepticism—as students grapple with the benefits and implications of integrating these powerful technologies into their academic and creative processes. Are they truly keen, or cautiously optimistic? Image generated by Nano Banana.

Source

Wonkhe

Summary

A YouGov survey of 1,027 students shows strong disapproval of using generative AI for assessed work: 93% say creating work using AI is unacceptable, 82% extend that to using parts of it. While many students have used AI study tools (summarising, finding sources, etc.), nearly half report encountering false or “hallucinated” content from those tools. Most believe their university’s stance on AI is too lenient rather than overly strict, and many expect that academic staff could detect misuse. There are benefits reported—some students think their grades and learning outcomes improved—but overall confidence in AI’s reliability and appropriateness remains low.

Key Points

  • 93% of students believe work created via generative AI for assessment is unacceptable; 82% say even partial use is unacceptable.
  • Around 47% of students who use AI study tools see hallucinations or false information in the AI’s output.
  • 66% believe it likely their university would detect AI-generated work used improperly.
  • Many students report grades and learning outcomes that are slightly better or about the same when using AI tools.
  • Most students are not particularly motivated to use AI to cheat; more often they use it in low-stakes, supportive ways.

Keywords

URL

https://wonkhe.com/wonk-corner/are-students-really-that-keen-on-generative-ai/

Summary generated by ChatGPT 5


How to use ChatGPT at university without cheating: ‘Now it’s more like a study partner’


Moving beyond fears of academic dishonesty, many students are now leveraging ChatGPT as an ethical ‘study partner’ to enhance their learning experience at university. This image illustrates a collaborative approach where AI supports understanding and exploration, rather than providing shortcuts, thereby fostering a new era of academic assistance. Image generated by Nano Banana.

Source

The Guardian

Summary

Many students now treat ChatGPT less like a cheating shortcut and more like a study partner: for grammar checks, revision, practice questions, and organising notes. Usage jumped from 66% to 92% in a year. Universities are clarifying rules: AI can support study but not generate assignment content. Educators stress AI literacy, awareness of risks (hallucinations, fake references), and critical thinking to ensure AI complements rather than replaces learning.

Key Points

  • Student AI use rose from ~66% to ~92% in a year; AI is viewed more as a study partner than a cheating tool.
  • Valid uses: organising notes, summarising, and generating practice questions.
  • Risks: overreliance and hallucinations; using AI to write assignments remains banned.
  • Some universities track AI usage or require usage logs; policies are becoming clearer.
  • Message: AI should be supplemental, not a substitute; build literacy and critical skills.

Keywords

URL

https://www.theguardian.com/education/2025/sep/14/how-to-use-chatgpt-at-university-without-cheating-now-its-more-like-a-study-partner

Summary generated by ChatGPT 5