Generative AI in Higher Education Teaching and Learning: Sectoral Perspectives


Source

Higher Education Authority

Summary

This report, commissioned by the Higher Education Authority (HEA), captures sector-wide perspectives on the impact of generative AI across Irish higher education. Through ten thematic focus groups and a leadership summit, it gathered insights from academic staff, students, support personnel, and leaders. The findings show that AI is already reshaping teaching, learning, assessment, and governance, but institutional responses remain fragmented and uneven. Participants emphasised the urgent need for national coordination, values-led policies, and structured capacity-building for both staff and students.

Key cross-cutting concerns included threats to academic integrity, the fragility of current assessment practices, risks of skill erosion, and unequal access. At the same time, stakeholders recognised opportunities for AI to enhance teaching, personalise learning, support inclusion, and free staff time for higher-value educational work. A consistent theme was that AI should not be treated merely as a technical disruption but as a pedagogical and ethical challenge that requires re-examining educational purpose.

Key Points

  • Sectoral responses to AI are fragmented; coordinated national guidance is urgently needed.
  • Generative AI challenges core values of authorship, originality, and academic integrity.
  • Assessment redesign is necessary—moving towards authentic, process-focused approaches.
  • Risks include skill erosion in writing, reasoning, and information literacy if AI is overused.
  • AI literacy for staff and students must go beyond tool use to include ethics and critical thinking.
  • Ethical use of AI requires shared principles, not just compliance or detection measures.
  • Inclusion is not automatic: without deliberate design, AI risks deepening inequality.
  • Staff feel underprepared and need professional development and institutional support.
  • Infrastructure challenges extend beyond tools to governance, procurement, and policy.
  • Leadership must shape educational vision, not just manage risk or compliance.

Conclusion

Generative AI is already embedded in higher education, raising urgent questions of purpose, integrity, and equity. The consultation shows both enthusiasm and unease, but above all a readiness to engage. The report concludes that a coordinated, values-led, and inclusive approach—balancing innovation with responsibility—will be essential to ensure AI strengthens, rather than undermines, Ireland’s higher education mission.

Keywords

URL

https://hea.ie/2025/09/17/generative-ai-in-higher-education-teaching-and-learning-sectoral-perspectives/

Summary generated by ChatGPT 5


Are students really that keen on generative AI?


In a collaborative workspace, a male student holds up a tablet displaying generative AI concepts, including a robotic arm, while a question mark hovers above. Another male student gestures enthusiastically, while two female students at laptops show skeptical or thoughtful expressions. A whiteboard covered with notes and diagrams is in the background. The scene depicts students with mixed reactions to generative AI. Generated by Nano Banana.
As generative AI tools become more prevalent, the student response is far from monolithic. This image captures the varied reactions—from eager adoption to thoughtful skepticism—as students grapple with the benefits and implications of integrating these powerful technologies into their academic and creative processes. Are they truly keen, or cautiously optimistic? Image generated by Nano Banana.

Source

Wonkhe

Summary

A YouGov survey of 1,027 students shows strong disapproval of using generative AI for assessed work: 93% say creating assessed work with AI is unacceptable, and 82% say the same of using it for parts of an assessment. While many students have used AI study tools (for summarising, finding sources, and similar tasks), nearly half report encountering false or “hallucinated” content from those tools. Most believe their university’s stance on AI is too lenient rather than overly strict, and many expect that academic staff could detect misuse. Some benefits are reported, with a minority of students saying their grades and learning outcomes improved, but overall confidence in AI’s reliability and appropriateness remains low.

Key Points

  • 93% of students believe work created via generative AI for assessment is unacceptable; 82% say even partial use is unacceptable.
  • Around 47% of students who use AI study tools see hallucinations or false information in the AI’s output.
  • 66% believe it likely their university would detect AI-generated work used improperly.
  • Many students report that their learning and grades are only slightly better, or about the same, when using AI tools.
  • Most students are not strongly motivated to use AI for cheating; more often they use it in low-stakes, supportive ways.

Keywords

URL

https://wonkhe.com/wonk-corner/are-students-really-that-keen-on-generative-ai/

Summary generated by ChatGPT 5


AI Defeats the Purpose of a Humanities Education


In a grand, traditional university library, a massive, monolithic black AI construct with glowing blue circuit patterns and red text displaying "HUMANITIES INTEGRITY: 0%" is violently crashing into a long wooden conference table, scattering books and ancient busts. A group of somber-faced academics in robes stands around, observing the destruction with concern. Image (and typos) generated by Nano Banana.
This image powerfully visualises the concern that AI’s capabilities might fundamentally undermine the core purpose of a humanities education. The crashing digital monolith symbolises AI’s disruptive force, threatening to erode the value of human critical thought, interpretation, and creativity that humanities disciplines aim to cultivate. Image (and typos) generated by Nano Banana.

Source

The Harvard Crimson

Summary

The authors argue that generative AI tools fundamentally conflict with what a humanities education aims to do: teach students how to think, read, write, and argue as humans do, rather than delegating those tasks to machines. They claim AI can polish writing but misses the point of learning through struggle, critique, and revision. The piece calls for banning generative AI in humanities courses, saying that even mild uses still sidestep essential intellectual growth. Imperfect, difficult writing is better for learning than polished AI‑assisted work.

Key Points

  • AI polishing undermines the learning process of struggle and critique.
  • Imperfect essays without AI are more educational.
  • Inconsistent policies across faculty cause confusion.
  • Humanities should preserve authentic human expression and critical thinking.
  • Banning AI helps preserve rigor and humanistic values.

Keywords

URL

https://www.thecrimson.com/article/2025/9/9/chiocco-farrell-harvard-ai/

Summary generated by ChatGPT 5


‘It’s a monster’: How generative AI is forcing university professors to rethink learning


In a dimly lit, traditional university lecture hall, a monstrous, multi-limbed, glowing blue digital creature with glowing red eyes looms large behind a professor at a podium. Around tables in the foreground, other professors in academic robes express concern and confusion, some pointing at the creature, while a blackboard in the background reads "RETHINK CURRICULUM" and "HUMAN PROMPT." Image (and typos) generated by Nano Banana.
Described by some as a “monster,” generative AI is fundamentally challenging established educational paradigms. This image dramatically illustrates the immense, even intimidating, presence of AI in academia, compelling university professors to urgently rethink and innovate their approaches to learning and curriculum design. Image (and typos) generated by Nano Banana.

Source

The Irish Times

Summary

Professors in Ireland are rethinking what learning and assessment mean as generative AI becomes widespread. With students using tools like ChatGPT for brainstorming, summarisation, and essay writing, faculty are concerned not just about plagiarism but about diminished reflection, reading, and originality. Responses include replacing take‑home essays with in‑class or open‑book work, designing reflective and relational assignments, and rebuilding community in learning. Faculty warn that education is becoming transactional, focused on grades over growth, and that AI use may hollow out critical thinking unless institutions redesign pedagogy and policies.

Key Points

  • Widespread AI use by students undermines traditional essays and originality.
  • Professors replace take‑home essays with in‑class/open‑book assessments.
  • Assignments now stress reflection, relational thinking, vulnerability — areas AI struggles with.
  • Students under pressure turn to AI instrumentally, prioritising grades over growth.
  • Institutions face resource challenges in redesigning assessments and policies.

Keywords

URL

https://www.irishtimes.com/ireland/education/2025/09/09/its-a-monster-how-generative-ai-is-forcing-university-professors-to-rethink-learning/

Summary generated by ChatGPT 5


AI is redefining university research: here’s how


A group of five diverse researchers in a futuristic lab are gathered around a glowing, circular interactive table. Bright neon lines of blue, green, and orange emanate from the table, connecting to large wall-mounted screens displaying complex data, molecular structures, and charts related to various scientific fields. A large window overlooks a modern city skyline, symbolizing advanced research in an urban university setting. Generated by Nano Banana.
AI is fundamentally reshaping the landscape of university research, offering unprecedented capabilities for data analysis, simulation, and discovery. This image envisions a collaborative, high-tech research environment where AI tools empower scholars to explore complex problems across disciplines, accelerating breakthroughs and pushing the boundaries of knowledge. Image generated by Nano Banana.

Source

Tech Radar

Summary

AI is accelerating many parts of academic research: mining large datasets, speeding hypothesis generation, automating literature reviews, and helping with data visualisation. While these tools alleviate time‑heavy, repetitive tasks, there are rising concerns about over‑reliance: loss of critical thinking, ethical issues (authorship, bias), accuracy, and what AI means for researcher agency. Academia must adopt clear policies, build researcher familiarity with AI, and ensure integrity and oversight so that AI complements rather than replaces human scholarship.

Key Points

  • AI tools automate tedious research tasks (data mining, literature reviews, visualisation).
  • Hypothesis generation at scale enables new discoveries.
  • Risks: loss of critical thinking, plagiarism, errors, ethical/authorship issues.
  • Helps non-native speakers, assists with referencing and peer review, but needs oversight.
  • Responsible use requires frameworks, training, and ethical guidelines.

Keywords

URL

https://www.techradar.com/ai-platforms-assistants/ai-is-redefining-university-research-heres-how

Summary generated by ChatGPT 5