A significant majority of teachers—8 out of 10—are actively re-evaluating their assignment design strategies in response to the rise of AI. This shift reflects a crucial effort to adapt educational methods, ensuring assignments remain relevant, promote critical thinking, and address the capabilities and challenges presented by artificial intelligence. Image (and typos) generated by Nano Banana.
Source
Tes
Summary
A British Council survey of 1,000 UK secondary teachers reveals that 79 per cent have changed how they design assignments because of artificial intelligence. The rapid integration of AI tools into student learning is reshaping assessment practices and communication skills in classrooms. While 59 per cent of teachers are creating assignments that incorporate AI responsibly, 38 per cent are designing tasks to prevent its use entirely. Teachers report declines in writing quality, originality, and vocabulary, as well as shorter attention spans among students. Education leaders, including Amy Lightfoot of the British Council and Sarah Hannafin of the NAHT, call for guidance, training, and proportional expectations to help schools manage AI’s growing influence while maintaining academic integrity and creativity.
Key Points
79 per cent of teachers have altered assignment design due to AI.
59 per cent integrate AI intentionally, while 38 per cent design tasks to exclude it.
Universities have the potential to transform AI from a perceived threat into a powerful educational opportunity, primarily by emphasising and teaching critical thinking skills. This image visually represents critical thinking as the crucial bridge that allows students to navigate the challenges of AI, such as potential plagiarism and shallow learning, and instead harness its power for advanced problem-solving and ethical innovation. Image (and typos) generated by Nano Banana.
Source
The Conversation
Summary
Anitia Lubbe argues that universities should stop treating AI primarily as a threat and instead use it to develop critical thinking. Her research team reviewed recent studies on AI in higher education, finding that generative tools excel at low-level tasks (recall and comprehension) but fail at high-level ones like evaluation and creativity. Traditional assessments, still focused on memorisation, risk encouraging shallow learning. Lubbe proposes redesigning assessments for higher-order skills—asking students to critique, adapt, and evaluate AI outputs. This repositions AI as a learning partner and shifts higher education toward producing self-directed, reflective, and analytical graduates.
Key Points
AI performs well on remembering and understanding tasks but struggles with evaluation and creation.
Current university assessments often reward the same low-level thinking AI already automates.
Teachers should design context-rich, authentic assessments (e.g. debates, portfolios, local case studies).
Students can use AI to practise analysis by critiquing or improving generated content.
Developing AI literacy, assessment literacy, and self-directed learning skills is key to ethical integration.
The emergence of “AI Humaniser” tools marks a new frontier in the battle against AI detection, allowing students to make ChatGPT-generated essays virtually undetectable. This image illustrates a student utilizing such a sophisticated tool, highlighting the technological cat-and-mouse game between AI content creation and detection, and posing significant challenges for academic integrity. Image (and typos) generated by Nano Banana.
Source
Forbes
Summary
The article reveals a growing trend: students are using “AI humaniser” tools to mask the signatures of ChatGPT-generated essays so they pass AI detectors. These humanisers tweak syntax, phrasing, rhythm and lexical choices to reduce detection risk. The practice raises serious concerns: it not only undermines efforts to preserve academic integrity, but also escalates the arms race between detection and evasion. Educators warn that when students outsource not only the content but also its disguise, distinguishing genuine work becomes even harder.
Key Points
AI humaniser apps are designed to rewrite AI output so that it appears more human and evades detectors.
The tools adjust stylistic features—such as sentence variety, tone, and lexical choices—to reduce red flags.
Use of these tools amplifies the challenge for educators trying to detect AI misuse.
This escalates a detection-evasion arms race: detectors get better, humanisers evolve.
The phenomenon underlines the urgency of redesigning assessment and emphasising process, not just output.
As the stealth of AI-generated content in written assignments increases, educators are exploring alternative assessment methods. This image highlights a return to oral examinations, where direct interaction can provide a more accurate measure of a student’s understanding and original thought, bypassing the challenges of AI detection software. Image (and typos) generated by Nano Banana.
Source
The Conversation
Summary
Because AI-written texts are now easy to pass off convincingly as a student’s own, detecting AI use in student work is becoming increasingly difficult. The article argues that oral assessments (discussions, structured questioning, viva voce) expose a student’s reasoning in ways AI can’t mimic. Voice, hesitation, follow-up questioning and depth of thought are far harder for AI to fake in real time. The authors suggest reintroducing or strengthening oral exams and conversational assessments as a countermeasure to maintain academic integrity and ensure authentic student understanding.
Key Points
AI tools produce polished text, but they fail when asked to defend their reasoning under questioning.
Oral tests can force students to show understanding, not just output.
Real-time dialogue gives instructors more confidence about authenticity than text alone.
Reintroduction of oral assessment may help bridge the integrity gap in AI-era classrooms.
The method isn’t perfect, but it is a practical and historically grounded safeguard.
by Jim O’Mahony, SFHEA – Munster Technological University
Estimated reading time: 5 minutes
The true test of a professor’s intelligence: finding the lost remote control. Image generated by Nano Banana
I remember as a 7-year-old having to hop off the couch at home to change the TV channel. Shortly afterwards, a futuristic-looking device called a remote control was placed in my hand, and since then I have chosen its wizardry over my own physical ability to operate the TV. Why wouldn’t I? It’s reliable, instant, multifunctional, compliant and most importantly… less effort.
The Seduction of Less Effort
Less effort… as humans, we’re biologically wired for it. Our bodies will always choose energy saving over energy expenditure, whether the task is physical or cognitive. It’s an evolutionary aid to conserve energy.
Now, my life hasn’t been impaired by the introduction of a remote control, but imagine for a minute if that remote control had replaced my thinking as a 7-year-old rather than my ability to operate a TV. Sounds fanciful, but in reality, this is exactly the world in which our students are now living.
Within their grasp is a seductive all-knowing technological advancement called Gen AI, with the ability to replace thinking, reflection, metacognition, creativity, evaluative judgement, interpersonal relationships and other richly valued attributes that make us uniquely human.
Now, don’t get me wrong, I’m a staunch flag bearer for this new age of Gen AI and can see the unlimited potential it holds for enhanced learning. Who knows? Someday, it may even solve Bloom’s 2 sigma problem through its promise of personalised learning.
Guardrails for a New Age
However, I also realise that as the adults in the room, we have a very narrow window to put sufficient guardrails in place for our students around its use, and urgent considered governance is needed from University executives.
Gen AI literacy isn’t the most glamorous term (it may not even be the most appropriate term), but it encapsulates what our priority as educators should be: learn what these tools are, how they work, what their limitations, problems, challenges and pitfalls are, and how we can use them positively within our professional practice to support rather than replace learning.
Isn’t that what we all strive for? To have the right digital tools matched with the best pedagogical practices so that our students enter the workforce as well-rounded, fully prepared graduates – a workforce, by the way, that is rapidly changing, with more than 71% of employers already routinely adopting Gen AI a year ago (we can only imagine what the figure is now).
Shouldn’t our teaching practices change, then, to reflect the new Gen AI-rich graduate attributes required by employers? Surely, the answer is YES… or is it? There is no easy answer – and perhaps no right answer. Maybe we’ve been presented with a wicked problem – an unsolvable situation where some crusade to resist AI, and others introduce policies to ‘ban the banning’ of AI! Confused, anyone?
Rethinking Assessment in a GenAI World
I believe a common-sense approach is best and would have us reimagine our educational programmes with valid, secure and authentic assessments that reward learning both with and without the use of Gen AI.
Achieving this is far from easy, but as a starting point, consider a recent paper from Deakin University, which advocates for structural changes to assessment design along with clearly communicated instructions to students around Gen AI use.
To facilitate a more discursive approach regarding reimagined assessment protocols, some universities are adopting ‘traffic light systems’ such as the AI Assessment scale, which, although not perfect (or the whole solution), at least promotes open and transparent dialogue with students about assessment integrity – and that’s never a bad thing.
The challenge will come from those academics who resist the adoption of Gen AI in education. Whether their reasons relate to privacy, environmental issues, ethics, inherent bias, AGI, autonomous AI or cognitive offloading concerns (all well-intentioned and entirely valid by the way), Higher Ed debates and decision making around this topic in the coming months will be robust and energetic.
Accommodating the fearful or ‘traditionalist educators’ who feel unprepared or unwilling to road-test Gen AI should be a key part of any educational strategy or initiative. Their voices should be heard and their opinions considered – but in return, they also need to understand how Gen AI works.
From Resistance to Fluency
Within each department, faculty, staffroom and T&L unit – even among the rows of your students – you will find early adopters and digital champions who are a little further along this dimly lit path to Gen AI enlightenment. Seek them out, have coffee with them, reflect on their wisdom and commit to trialling at least one new Gen AI tool or application each week – here’s a list of 100 to get you started. Slowly build your confidence, take an open course, learn about AI fluency, and benefit from the expertise of others.
I’m not encouraging you to be an AI evangelist, but improving your knowledge and general AI capabilities will leave you better placed to make informed decisions for yourself and your students.
Now, did anyone see where I left the remote control?
Jim O’Mahony
University Professor | Biotechnologist | Teaching & Learning Specialist Munster Technological University
I am a passionate and enthusiastic university lecturer with over 20 years’ experience of designing, delivering and assessing undergraduate and postgraduate programmes. My primary focus as an academic is to empower students to achieve their full potential through innovative educational strategies and carefully designed curricula. I embrace the strategic and well-intentioned use of digital tools as part of my learning ethos, and I have been an early adopter and enthusiastic advocate of Artificial Intelligence (AI) as an educational tool.