While AI offers efficiency in creating lesson plans, a new report suggests that these automated curricula often fall short in fostering student inspiration and promoting essential critical thinking skills. This visual highlights the gap between AI-generated structures and the nuanced needs of engaging pedagogy. Image (and typos) generated by Nano Banana.
Source
The Conversation
Summary
Torrey Trust reports that AI-generated lesson plans, though convenient, fail to promote higher-order thinking and inclusivity in the classroom. In a study analysing 311 AI-created civics lesson plans from ChatGPT, Gemini, and Copilot, 90 per cent of activities were found to encourage only basic recall and comprehension rather than critical or creative thinking. Using frameworks such as Bloom’s taxonomy and Banks’ multicultural integration model, the researchers found that only 6 per cent of plans included diverse perspectives or representation of marginalised groups. The study warns that while AI tools can save teachers time, they risk reproducing formulaic, one-size-fits-all instruction. Teachers are encouraged to use AI for inspiration—not automation—and to embed context, creativity, and cultural depth into their own designs.
Key Points
311 AI-generated civics lesson plans were analysed using Bloom’s taxonomy and Banks’ model.
90 per cent of activities promoted only lower-order thinking skills such as memorisation and recall.
Only 6 per cent included multicultural or diverse perspectives.
AI tools produce generic, context-free lesson plans not tailored to real classrooms.
Educators should use AI as a support tool, prompting it with detailed, critical instructions.
A critical new analysis reveals that current AI training protocols are designed to avoid the use of three words—“why,” “how,” and “imagine”—which are fundamental to human learning, critical thinking, and creativity. This raises significant questions about the depth of understanding and innovation possible with AI. Image (and typos) generated by Nano Banana.
Source
Education Week
Summary
Sam Wineburg and Nadav Ziv argue that artificial intelligence, by design, avoids the phrase “I don’t know,” a trait that undermines the essence of learning. Drawing on OpenAI’s research, they note that chatbots are penalised for expressing uncertainty and rewarded for confident—but often incorrect—answers. This, they contend, clashes with educational goals that value questioning, evidence-weighing, and intellectual humility. The authors caution educators to slow the rush to integrate AI into classrooms without teaching critical evaluation. Instead of treating AI as a source of truth, students must learn to interrogate it—asking for sources, considering evidence, and recognising ambiguity. True learning, they write, depends on curiosity and the courage to admit what one does not know.
Key Points
Chatbots are trained to eliminate uncertainty, prioritising fluency over accuracy.
Students and adults often equate confident answers with credible information.
AI risks promoting surface-level understanding and discouraging critical inquiry.
Educators should model scepticism, teaching students to source and question AI outputs.
Learning thrives on doubt and reflection—qualities AI currently suppresses.
Universities are evolving their approach to artificial intelligence, moving beyond simply detecting AI-generated content to actively and ethically embedding AI as a tool for enhanced learning and development. This image visually outlines this critical shift, showcasing how institutions are now focusing on integrating AI within a robust ethical framework to foster personalised learning, collaborative environments, and innovative educational practices. Image (and typos) generated by Nano Banana.
Source
HEPI
Summary
Rather than focusing on detection and policing, this blog argues universities should shift toward ethically embedding AI as a pedagogical tool. Based on research commissioned by Studiosity, evidence shows that when AI is used responsibly, it correlates with improved outcomes and retention—especially for non-traditional students. The blog presents a “conduit” metaphor: AI is like an overhead projector—helpful, but not a replacement for core learning. A panel at the Universities UK Annual Conference proposed values and guardrails (integrity, equity, transparency, adaptability) to guide institutional policy. The piece calls for sandboxing new tools and for centring student support and human judgment in AI adoption.
Key Points
The narrative needs to move from detection and restriction to development and support of AI in learning.
Independent research found a positive link between guided AI use and student attainment/retention, especially for non-traditional learners.
AI should be framed as a conduit (like projectors) rather than a replacement of teaching/learning.
A values-based framework is needed: academic integrity, equity, transparency, responsibility, resilience, empowerment, adaptability.
Universities should use “sandboxing” (controlled testing) and robust governance rather than blanket bans.
by Jim O’Mahony, SFHEA – Munster Technological University
Estimated reading time: 5 minutes
The true test of a professor’s intelligence: finding the lost remote control. Image generated by Nano Banana.
I remember as a 7-year-old having to hop off the couch at home to change the TV channel. Shortly afterwards, a futuristic-looking device called a remote control was placed in my hand, and since then I have chosen its wizardry over my own physical ability to operate the TV. Why wouldn’t I? It’s reliable, instant, multifunctional, compliant and most importantly… less effort.
The Seduction of Less Effort
Less effort… as humans, we’re biologically wired for it. Our bodies will always choose the energy-saving option over the energy-consuming one, whether the task is physical or cognitive. It’s an evolutionary mechanism to conserve energy.
Now, my life hasn’t been impaired by the introduction of a remote control, but imagine for a minute if that remote control had replaced my thinking as a 7-year-old rather than my ability to operate a TV. Sounds fanciful, but in reality, this is exactly the world in which our students are now living.
Within their grasp is a seductive all-knowing technological advancement called Gen AI, with the ability to replace thinking, reflection, metacognition, creativity, evaluative judgement, interpersonal relationships and other richly valued attributes that make us uniquely human.
Now, don’t get me wrong, I’m a staunch flag bearer for this new age of Gen AI and can see the unlimited potential it holds for enhanced learning. Who knows? Someday it may even solve Bloom’s two-sigma problem through its promise of personalised learning.
Guardrails for a New Age
However, I also realise that, as the adults in the room, we have a very narrow window to put sufficient guardrails in place for our students around its use, and urgent, considered governance is needed from university executives.
Gen AI literacy isn’t the most glamorous term (it may not even be the most appropriate one), but it encapsulates what our priority as educators should be: learn what these tools are, how they work, what their limitations, problems, challenges and pitfalls are, and how we can use them positively within our professional practice to support rather than replace learning.
Isn’t that what we all strive for? To have the right digital tools matched with the best pedagogical practices so that our students enter the workforce as well-rounded, fully prepared graduates – a workforce, by the way, that is rapidly changing, with more than 71% of employers already routinely adopting Gen AI 12 months ago (we can only imagine what the figure is now).
Shouldn’t our teaching practices change, then, to reflect the new Gen AI-rich graduate attributes required by employers? Surely the answer is YES… or is it? There is no easy answer – and perhaps no right answer. Maybe we’ve been presented with a wicked problem – an unsolvable situation where some crusade to resist AI, and others introduce policies to ‘ban the banning’ of AI! Confused, anyone?
Rethinking Assessment in a GenAI World
I believe a common-sense approach is best and would have us reimagine our educational programmes with valid, secure and authentic assessments that reward learning both with and without the use of Gen AI.
Achieving this is far from easy, but as a starting point, consider a recent paper from Deakin University, which advocates for structural changes to assessment design along with clearly communicated instructions to students around Gen AI use.
To facilitate a more discursive approach regarding reimagined assessment protocols, some universities are adopting ‘traffic light systems’ such as the AI Assessment scale, which, although not perfect (or the whole solution), at least promotes open and transparent dialogue with students about assessment integrity – and that’s never a bad thing.
The challenge will come from those academics who resist the adoption of Gen AI in education. Whether their reasons relate to privacy, environmental issues, ethics, inherent bias, AGI, autonomous AI or cognitive offloading concerns (all well-intentioned and entirely valid by the way), Higher Ed debates and decision making around this topic in the coming months will be robust and energetic.
Accommodating the fearful or ‘traditionalist educators’ who feel unprepared or unwilling to road-test Gen AI should be a key part of any educational strategy or initiative. Their voices should be heard and their opinions considered – but in return, they also need to understand how Gen AI works.
From Resistance to Fluency
Within each department, faculty, staffroom, T&L department – even among the rows of your students, you will find early adopters and digital champions who are a little further along this dimly lit path to Gen AI enlightenment. Seek them out, have coffee with them, reflect on their wisdom and commit to trialling at least one new Gen AI tool or application each week – here’s a list of 100 to get you started. Slowly build your confidence, take an open course, learn about AI fluency, and benefit from the expertise of others.
I’m not encouraging you to be an AI evangelist, but improving your knowledge and general AI capabilities will leave you better placed to make informed decisions for yourself and your students.
Now, did anyone see where I left the remote control?
Jim O’Mahony
University Professor | Biotechnologist | Teaching & Learning Specialist Munster Technological University
I am a passionate and enthusiastic university lecturer with over 20 years’ experience of designing, delivering and assessing undergraduate and postgraduate programmes. My primary focus as an academic is to empower students to achieve their full potential through innovative educational strategies and carefully designed curricula. I embrace the strategic and well-intentioned use of digital tools as part of my learning ethos, and I have been an early adopter and enthusiastic advocate of Artificial Intelligence (AI) as an educational tool.
While AI teaching tools are certainly not a ‘panacea’ for all educational challenges, they possess immense potential as a ‘force multiplier,’ significantly enhancing learning experiences. This image visually contrasts AI’s limitations with its power to augment human capabilities, underscoring a nuanced approach to its integration in the classroom. Image (and typos) generated by Nano Banana.
Source
The New Indian Express
Summary
The author argues that while AI teaching tools are gaining attention, their value shows only when paired with thoughtful pedagogy, not when used in isolation. Meta-analyses and classroom studies suggest AI tools (adaptive quizzes, personalised feedback) can enhance student performance and time management—but only in learning environments where human feedback, active engagement, and scaffolding remain central. AI should assist, not replace, the relational, ethical, and mentoring roles of teachers. Without integrating AI into active learning, its benefits are diluted; it risks becoming mere decoration.
Key Points
AI tools deliver gains when embedded into active, interactive teaching—not used as standalone replacements.
Meta-studies show stronger outcomes when technology is personalised and integrated rather than simply overlaid.
Students report improved time management and performance when AI offers real-time feedback and adaptive quizzing.
Pedagogical design (feedback loops, scaffolding, mentor oversight) remains essential; AI alone doesn’t do that work.
AI cannot replicate human qualities such as creativity, ethics, judgement, and emotional understanding.