Teaching, Learning, Assessment and GenAI: Moving from Reaction to Intentional Practice

By Dr Hazel Farrell & Ken McCarthy, South East Technological University & GenAI:N3
Estimated reading time: 7 minutes
Moving from reaction to intentional practice: Exploring the collaborative future of Generative AI in higher education through human-led dialogue and pedagogical reflection. Image (and typos) generated by Nano Banana.

Generative AI has become part of higher education with remarkable speed.

In a short period of time, it has entered classrooms, assessment design, academic writing, feedback processes, and professional workflows. For many educators, its arrival felt sudden and difficult to make sense of, leaving little space to pause and consider what this shift means for learning, teaching, and academic practice.

Initial responses across the sector have often focused on risk, regulation, and control. These concerns are understandable. Yet they only tell part of the story. Alongside uncertainty and anxiety, there is also curiosity, experimentation, and a growing recognition that GenAI raises questions that are fundamentally pedagogical rather than purely technical.

On 21 January, we are delighted to host #LTHEchat to explore these questions together and to move the conversation from reaction towards more intentional, reflective practice.

The discussion will be grounded in the Manifesto for Generative AI in Higher Education, and informed by the wider work of GenAI:N3, a national initiative in Ireland supporting collaborative engagement with generative AI across higher education.

GenAI:N3: A Collaborative Project for the Sector

GenAI:N3 is a national network established in Ireland as part of the N-TUTORR programme to support technological higher education institutions as they responded to the rapid emergence of generative AI. Rather than focusing on tools or technical solutions, the project centres on people, practice, and shared learning.

At its core, GenAI:N3 aims to build institutional and sectoral capacity by creating spaces where educators, professional staff, and leaders can explore GenAI together. Its work is grounded in collaboration across institutions and disciplines, recognising that no single university or role has all the answers.

The project focuses on several interconnected areas:

  • Supporting communities of practice where staff can share experiences, challenges, and emerging approaches
  • Encouraging critical and reflective engagement with GenAI in teaching, learning, assessment, and professional practice
  • Exploring the ethical, social, and institutional implications of GenAI, including questions of power, inclusion, sustainability, and academic judgement
  • Developing shared resources, events, and conversations that help the sector learn collectively rather than in isolation

GenAI:N3 is not about accelerating adoption for its own sake. It is about helping institutions and individuals make informed, values-led decisions that are aligned with the purposes of higher education.

The Manifesto as a Shared Thinking Space

The Manifesto for Generative AI in Higher Education emerged from this collaborative context. It did not begin as a formal deliverable or a policy exercise. Instead, it took shape gradually through workshops, conversations, reflections, and recurring questions raised by staff and students across the sector.

What became clear was a need for a shared language. Not a framework that closed down debate, but a set of statements that could hold complexity, uncertainty, and difference.

The Manifesto brings together 30 short statements organised across three themes:

  • Rethinking teaching and learning
  • Responsibility, ethics, and power
  • Imagination, humanity, and the future

It is intentionally concise and deliberately open. It does not offer instructions or compliance rules. Instead, it invites educators and institutions to pause, reflect, and ask what kind of learning we are designing for in a world where generative tools are readily available.

One of its central ideas is that GenAI does not replace thinking. Rather, it reveals the cost of not thinking. In doing so, it challenges us to look beyond surface solutions and to engage more deeply with questions of purpose, judgement, and educational values.

Why These Conversations Matter Now

Much of the early discourse around GenAI has centred on assessment integrity and detection. While these issues matter, they risk narrowing the conversation too quickly.

GenAI does not operate uniformly across disciplines, contexts, or learning designs. What is productive in one setting may be inappropriate in another. Students experience this inconsistency acutely, particularly when institutional policies feel disconnected from everyday teaching practice.

The work of GenAI:N3, and the thinking captured in the Manifesto, keeps this complexity in view. It foregrounds ideas such as transparency as a foundation for trust, academic judgement as something that can be supported but not automated, and ethical leadership as an institutional responsibility rather than an individual burden.

These ideas play out in very practical ways, in curriculum design, in assessment briefs, in conversations with students, and in decisions about which tools are used and why.

Why #LTHEchat?

#LTHEchat has long been a space for thoughtful, practice-led discussion across higher education. That makes it an ideal forum to explore generative AI not simply as a technology, but as a catalyst for deeper pedagogical and institutional reflection.

This chat is not about promoting a single position or reaching neat conclusions. Instead, it is an opportunity to surface experiences, tensions, and emerging practices from across the sector.

The questions we will pose are designed to open up dialogue around issues such as abundance, transparency, disciplinary difference, and what it means to keep learning human in a GenAI-rich environment.

An Invitation to Join the Conversation

Whether you are actively experimenting with generative AI, approaching it with caution, or still forming your views, your perspective is welcome.

Bring examples from your own context. Bring uncertainties and unfinished thinking. The Manifesto itself is open to use, adapt, and challenge, and GenAI:N3 continues to evolve through the contributions of those engaging with its work.

As the Manifesto suggests, the future classroom is a conversation. On 21 January, we hope you will join that conversation with us through #LTHEchat.

Links

LTHE Chat Website: https://lthechat.com/

LTHE Chat Bluesky: https://bsky.app/profile/lthechat.bsky.social

Dr Hazel Farrell

GenAI Academic Lead
SETU

Hazel Farrell has been immersed in the AI narrative since 2023, both through practice-based research and through the development of guidelines, frameworks, tools, and training to support educators and learners throughout the HE sector. She led the national N-TUTORR GenAI:N3 project, which was included in the EDUCAUSE 2025 Horizon Report as an exemplar of good practice. She is the SETU Academic Lead for GenAI and Chair of the university’s GenAI Steering Committee. The practical application of GenAI provides a strong foundation for her research, with student engagement initiatives for creative disciplines at the forefront of her work. Hazel recently won the DEC24 Digital Educator Award for her GenAI contributions to the HE sector. She has presented extensively on a variety of GenAI-related topics and has several publications in this space.

Ken McCarthy

Head of Centre for Academic Practice
SETU

Ken McCarthy is the Head of the Centre for Academic Practice at SETU, where he leads strategic initiatives to enhance teaching, learning, and assessment across the university. He works with academic staff, professional teams, and students to promote inclusive, research-informed, and digitally enriched education. He is the current vice-president of ILTA (Irish Learning Technology Association) and was previously the university lead for the N-TUTORR programme. He has a lifelong interest in technology and education and combines the two in his professional role. In recent years he has written and presented on technology-enhanced learning in general, and on GenAI in particular.

A History Professor Says AI Did Not Break College; It Exposed How Broken It Already Was


Unmasking the flaws: A history professor’s perspective suggesting that AI merely shone a light on the structural vulnerabilities and existing problems within higher education, rather than being the sole source of disruption. Image (and typos) generated by Nano Banana.

Source

Business Insider

Summary

This article features a U.S. history professor who argues that generative AI did not cause the crisis currently unfolding in higher education but instead revealed long-standing structural flaws. According to the professor, AI has exposed weaknesses in assessment design, unclear expectations placed on students and unsustainable workloads carried by academic staff. The sudden visibility of AI-generated essays and assignments has forced institutions to confront the limitations of traditional assessment models that rely heavily on polished written output rather than demonstrated cognitive processes. The professor notes that AI has unintentionally highlighted inequities in student preparation, inconsistencies in grading norms and the mismatch between institutional rhetoric and actual resourcing. Rather than attempting to suppress AI, the article argues that higher education should treat this moment as an opportunity to redesign curricula, diversify assessments and rethink the broader purpose of university education. The piece positions AI as a catalyst for long-overdue reform, emphasising that genuine improvement will require institutions to invest in pedagogical redesign, staff support and clearer communication around learning outcomes.

Key Points

  • AI highlighted systemic weaknesses already present in higher education
  • Exposed flaws in assessment design and grading expectations
  • Revealed pressures on overworked teaching staff
  • Suggests AI could drive constructive reform
  • Encourages rethinking pedagogy and institutional priorities

URL

https://www.businessinsider.com/ai-didnt-break-college-it-exposed-broken-system-professor-2025-11

Summary generated by ChatGPT 5.1


How AI Adoption May Erode Key Skills US Students Need in an Automated World


The automation paradox: Experts warn that while AI drives efficiency, its widespread adoption in education may inadvertently erode the crucial cognitive and creative skills US students need to thrive in a future dominated by technology. Image (and typos) generated by Nano Banana.

Source

Times of India (Education International Desk)

Summary

This article explores concerns that widespread adoption of AI tools in education may undermine essential skills that students require for long-term success in an increasingly automated world. Educators and analysts interviewed argue that easy access to generative AI for writing, problem solving and research may weaken students’ capacity for critical thinking, creativity and independent judgement. They note that while AI can accelerate tasks, it may also reduce opportunities for deep learning and cognitive struggle, both of which are crucial for intellectual development. The article raises concerns that students who rely heavily on AI may experience diminished confidence in producing original work and solving complex problems without technological support. Experts recommend curriculum renewal that blends responsible AI literacy with explicit instruction in foundational skills, ensuring that students can use AI effectively without sacrificing their broader intellectual growth. The discussion reflects a recurring theme in the global AI-in-education debate: the need to preserve human expertise and cognitive resilience in an era of pervasive automation. The article calls for educators, policymakers and institutions to strike a balance between embracing AI and safeguarding human capabilities.

Key Points

  • Widespread AI use may weaken foundational cognitive skills
  • Risks include reduced independent thinking and reduced confidence
  • Educators call for curriculum redesign with balanced AI integration
  • Highlights need for responsible AI literacy
  • Addresses long-term workforce preparation concerns

URL

https://timesofindia.indiatimes.com/education/news/how-ai-adoption-may-erode-key-skills-us-students-need-in-an-automated-world/articleshow/125672541.cms

Summary generated by ChatGPT 5.1


English Professors Take Individual Approaches to Deterring AI Use


Diverse strategies in action: English professors are developing unique and personalised methods to encourage original thought and deter the misuse of AI in their classrooms. Image (and typos) generated by Nano Banana.

Source

Yale Daily News

Summary

Without a unified departmental policy, Yale University’s English professors are independently addressing the challenge of generative AI in student writing. While all interviewed faculty agree that AI undermines critical thinking and originality, their responses vary from outright bans to guided experimentation. Professors Stefanie Markovits and David Bromwich warn that AI shortcuts obstruct the process of learning to think and write independently, while Rasheed Tazudeen enforces a no-tech classroom to preserve student engagement. Playwriting professor Deborah Margolin insists that AI cannot replicate authentic human voice and creativity. Across approaches, faculty emphasise trust, creativity, and the irreplaceable role of struggle in developing genuine thought.

Key Points

  • Yale English Department lacks a central AI policy, favouring academic freedom.
  • Faculty agree AI use hinders original thinking and creative voice.
  • Some, like Tazudeen, impose no-tech classrooms to deter reliance on AI.
  • Others allow limited exploration under clear guidelines and reflection.
  • Consensus: authentic learning requires human engagement and intellectual struggle.

URL

https://yaledailynews.com/blog/2025/10/29/english-professors-take-individual-approaches-to-deterring-ai-use/

Summary generated by ChatGPT 5


Is Increasing Use of AI Damaging Students’ Learning Ability?


A critical question posed: Does the growing reliance on AI lead to cognitive decay, or can it be harnessed to foster critical thinking and creativity in students? Image (and typos) generated by Nano Banana.

Source

Radio New Zealand (RNZ) – Nine to Noon

Summary

University of Auckland professor Alex Sims examines whether the growing integration of artificial intelligence in classrooms and lecture halls enhances or impedes student learning. Drawing on findings from an MIT neuroscience study and an Oxford University report, Sims highlights both the cognitive effects of AI use and students’ own accounts of its impact on motivation and understanding. The research suggests that while AI tools can aid efficiency, overreliance may disrupt the brain processes central to deep learning and independent reasoning. The discussion raises questions about how to balance technological innovation with the preservation of critical thinking and sustained attention.

Key Points

  • AI use in education is expanding rapidly across levels and disciplines.
  • MIT research explores how AI affects neural activity linked to learning.
  • Oxford report includes students’ perceptions of AI’s influence on study habits.
  • Benefits include efficiency; risks include reduced cognitive engagement.
  • Experts urge educators to maintain a balance between AI support and active learning.

URL

https://www.rnz.co.nz/national/programmes/ninetonoon/audio/2019010577/is-increasing-use-of-ai-damaging-students-learning-ability

Summary generated by ChatGPT 5