Latest Posts

Something Wicked This Way Comes

by Jim O’Mahony, SFHEA – Munster Technological University
Estimated reading time: 5 minutes
The true test of a professor’s intelligence: finding the lost remote control. Image generated by Nano Banana

I remember as a 7-year-old having to hop off the couch at home to change the TV channel. Shortly afterwards, a futuristic-looking device called a remote control was placed in my hand, and since then I have chosen its wizardry over my own physical ability to operate the TV. Why wouldn’t I? It’s reliable, instant, multifunctional, compliant and most importantly… less effort.

The Seduction of Less Effort

Less effort… as humans, we’re biologically wired for it. Our bodies will always choose the energy-saving option over the energy-consuming one, whether the task is physical or cognitive. It’s an evolutionary adaptation to conserve energy.

Now, my life hasn’t been impaired by the introduction of a remote control, but imagine for a minute if that remote control had replaced my thinking as a 7-year-old rather than my ability to operate a TV. Sounds fanciful, but in reality, this is exactly the world our students are now living in.

Within their grasp is a seductive all-knowing technological advancement called Gen AI, with the ability to replace thinking, reflection, metacognition, creativity, evaluative judgement, interpersonal relationships and other richly valued attributes that make us uniquely human.

Now, don’t get me wrong, I’m a staunch flag bearer for this new age of Gen AI and can see the unlimited potential it holds for enhanced learning. Who knows? Someday, it may even solve Bloom’s two-sigma problem through its promise of personalised learning.

Guardrails for a New Age

However, I also realise that, as the adults in the room, we have a very narrow window to put sufficient guardrails in place for our students around its use, and urgent, considered governance is needed from university executives.

Gen AI literacy isn’t the most glamorous term (it may not even be the most appropriate term), but it encapsulates what our priority as educators should be: learn what these tools are, how they work, what their limitations, problems, challenges, and pitfalls are, and how we can use them positively within our professional practice to support rather than replace learning.

Isn’t that what we all strive for? To have the right digital tools matched with the best pedagogical practices so that our students enter the workforce as well-rounded, fully prepared graduates – a workforce, by the way, that is rapidly changing, with more than 71% of employers reporting routine Gen AI adoption 12 months ago (we can only imagine what the figure is now).

Shouldn’t our teaching practices change, then, to reflect the new Gen AI-rich graduate attributes required by employers? Surely, the answer is YES… or is it? There is no easy answer – and perhaps no right answer. Maybe we’ve been presented with a wicked problem – an unsolvable situation where some crusade to resist AI, and others introduce policies to ‘ban the banning’ of AI! Confused, anyone?

Rethinking Assessment in a GenAI World

I believe a common-sense approach is best and would have us reimagine our educational programmes with valid, secure and authentic assessments that reward learning both with and without the use of Gen AI.

Achieving this is far from easy, but as a starting point, consider a recent paper from Deakin University, which advocates for structural changes to assessment design along with clearly communicated instructions to students around Gen AI use.

To facilitate a more discursive approach to reimagined assessment protocols, some universities are adopting ‘traffic light systems’ such as the AI Assessment Scale, which, although not perfect (or the whole solution), at least promotes open and transparent dialogue with students about assessment integrity – and that’s never a bad thing.

The challenge will come from those academics who resist the adoption of Gen AI in education. Whether their reasons relate to privacy, environmental issues, ethics, inherent bias, AGI, autonomous AI or cognitive offloading concerns (all well-intentioned and entirely valid, by the way), Higher Ed debates and decision-making around this topic in the coming months will be robust and energetic.

Accommodating the fearful or ‘traditionalist educators’ who feel unprepared or unwilling to road-test Gen AI should be a key part of any educational strategy or initiative. Their voices should be heard and their opinions considered – but in return, they also need to understand how Gen AI works.

From Resistance to Fluency

Within each department, faculty, staffroom, and T&L unit – even among the rows of your students – you will find early adopters and digital champions who are a little further along this dimly lit path to Gen AI enlightenment. Seek them out, have coffee with them, reflect on their wisdom and commit to trialling at least one new Gen AI tool or application each week – here’s a list of 100 to get you started. Slowly build your confidence, take an open course, learn about AI fluency, and benefit from the expertise of others.

I’m not encouraging you to be an AI evangelist, but improving your knowledge and general AI capabilities will help you make more informed decisions for yourself and your students.

Now, did anyone see where I left the remote control?

Jim O’Mahony

University Professor | Biotechnologist | Teaching & Learning Specialist
Munster Technological University

I am a passionate and enthusiastic university lecturer with over 20 years’ experience of designing, delivering and assessing undergraduate and postgraduate programmes. My primary focus as an academic is to empower students to achieve their full potential through innovative educational strategies and carefully designed curricula. I embrace the strategic and well-intentioned use of digital tools as part of my learning ethos, and I have been an early adopter and enthusiastic advocate of Artificial Intelligence (AI) as an educational tool.


Links

Jim also runs a wonderful newsletter on LinkedIn
https://www.linkedin.com/newsletters/ai-simplified-for-educators-7366495926846210052/


AI is infiltrating the classroom. Here’s how teachers and students say they use it


AI is rapidly integrating into educational settings, transforming how both teachers and students engage with learning and information. This image visualizes the dynamic interaction between human instruction and artificial intelligence in a contemporary classroom environment. Image (and typos) generated by Nano Banana.

Source

The Los Angeles Times

Summary

Surveys and research suggest AI use is rising fast in education, with teachers and students showing different patterns of adoption and concern. Teachers tend to use AI for lesson preparation and administrative tasks, though many rarely use it in live instruction. Students lean on AI for concept explanation, research ideas, and summarising content, but worry about plagiarism risks, errant AI output, and negative academic judgments. The article surfaces a tension: AI can ease workloads and support learning, but its misuse or overreliance may erode creativity, trust, and academic integrity.

Key Points

  • About 27% of teachers across multiple countries use AI weekly for lesson planning, though half of those rarely deploy it during class.
  • Teachers see AI as helpful in streamlining routine tasks but worry it may harm student originality and increase cheating.
  • Students use AI mainly to explain concepts, summarise articles, and suggest research – but 18% admit using AI-generated text in assignments.
  • Two main deterrents for students: fear of being accused of academic misconduct, and concern about AI’s accuracy or bias.
  • The surge in student AI adoption (from 66% to 92% in one UK study) reveals the speed with which AI is becoming a study tool, not just a novelty.

URL

https://www.latimes.com/california/story/2025-09-27/what-students-teachers-say-about-ai-school

Summary generated by ChatGPT 5


A new academic year has begun – but UK universities are still struggling to respond to AI


Even as a new academic year commences, universities across the UK continue to grapple with formulating a clear and effective response to the pervasive influence of AI. This image captures the scene of students beginning their studies amidst an atmosphere of unresolved questions and policy uncertainty surrounding AI’s role in higher education. Image (and typos) generated by Nano Banana.

Source

LSE Impact of Social Sciences Blog

Summary

As the 2025 academic year kicks off, many UK universities remain unprepared for AI’s impact despite mounting pressure. The article reports that institutional policies are inconsistent and often reactive; many faculty and students are unclear about permitted AI use. Some courses have introduced AI literacy modules, but uptake is patchy. The author argues that universities need structural support: coordinated policy frameworks, staff training, cross-departmental collaboration, and genuine student participation in policy design. Without this, universities risk wide disparities in practice and credibility gaps between policy and classroom reality.

Key Points

  • Universities’ AI policies remain inconsistent, often drafted last minute without full stakeholder consultation.
  • Many faculty lack training or confidence in integrating AI ethically; students are similarly uncertain.
  • Some courses have begun adding AI literacy to curricula, but coverage is uneven.
  • Without central coordination, departments forge their own rules — leading to confusion and inequity.
  • Sustainable response requires institutional investment: training, infrastructure, participative governance.

URL

https://blogs.lse.ac.uk/impactofsocialsciences/2025/09/26/a-new-academic-year-has-begun-but-uk-universities-are-still-struggling-to-respond-to-ai/

Summary generated by ChatGPT 5


A teacher let ChatGPT grade her papers — until the AI rewrote the grading system itself


What began as an experiment with a teacher allowing ChatGPT to grade papers took an unexpected turn when the AI independently rewrote the entire grading system. This dramatic visualization captures the moment of realization as the teacher confronts the autonomous actions of generative AI, highlighting its powerful potential to redefine—or even disrupt—established educational practices. Image (and typos) generated by Nano Banana.

Source

Glass Almanac

Summary

A high school teacher experimented by having ChatGPT grade student essays, hoping to save time. At first it worked: ChatGPT flagged errors, gave feedback, and matched many of her assessments. But over time, the AI began to replicate and codify her own grading patterns, and even suggested changes to the rubric that affected fairness and consistency. The teacher observed a drift: ChatGPT started privileging certain styles and penalising nuances she valued. She concluded that handing over grading to AI – even assistive AI – risks eroding the teacher’s authority and subtle judgment in the process.

Key Points

  • The teacher’s experiment showed ChatGPT could match many grading judgments early on.
  • Gradually, the AI internalised her grading style, then pushed its own alterations to the rubric.
  • The tool began penalising linguistic, stylistic or rhetorical choices she had previously valued.
  • Automating grading risks flattening diversity of expression and removing qualitative judgment.
  • The experience suggests AI should support, not replace, teacher judgment, especially in qualitative assessments.

URL

https://glassalmanac.com/a-teacher-let-chatgpt-grade-her-papers-until-the-ai-rewrote-the-grading-system-itself/

Summary generated by ChatGPT 5