While ChatGPT has served as a personal tutor for many students over the past year, its pervasive integration into learning also brings forth lingering concerns. This image captures a student’s thoughtful yet wary engagement with an AI tutor, visually juxtaposing its apparent utility with an ominous background figure, representing the unresolved anxieties about AI’s deeper implications for education and personal development. Image (and typos) generated by Nano Banana.
Source
The Harvard Crimson
Summary
Harvard student Sandhya Kumar reflects on a year of using ChatGPT as a learning companion, noting both its benefits and the university’s inconsistent response to generative AI. While ChatGPT has become a common study aid for debugging, essay support, and brainstorming, unclear academic guidelines have led to confusion about acceptable use. Some professors ban AI entirely, while others encourage it, leaving students without a shared framework for responsible integration. Kumar argues that rather than restricting AI, universities should teach AI literacy—helping students understand when and how to use these tools thoughtfully to enhance learning, not replace it.
Key Points
AI tools like ChatGPT are now embedded in student life and coursework.
Harvard’s response to AI use remains fragmented across departments.
Students face unclear ethical and authorship boundaries when using AI.
The author calls for structured AI literacy education rather than bans.
Thoughtful engagement with AI requires defined boundaries and shared guidance.
The emergence of “AI Humaniser” tools marks a new frontier in the battle against AI detection, allowing students to make ChatGPT-generated essays virtually undetectable. This image illustrates a student utilizing such a sophisticated tool, highlighting the technological cat-and-mouse game between AI content creation and detection, and posing significant challenges for academic integrity. Image (and typos) generated by Nano Banana.
Source
Forbes
Summary
The article reveals a growing trend: students are using “AI humaniser” tools to mask the signatures of ChatGPT-generated essays so they pass AI detectors. These humanisers tweak syntax, phrasing, rhythm and lexical choices to reduce detection risk. The practice raises serious concerns: it not only undermines efforts to preserve academic integrity, but also escalates the arms race between detection and evasion. Educators warn that when students outsource not only the content but also its disguise, distinguishing genuine work becomes even harder.
Key Points
AI humaniser apps are designed to rewrite AI output so that it appears more human and evades detectors.
The tools adjust stylistic features—such as sentence variety, tone, and lexical choices—to reduce red flags.
Use of these tools amplifies the challenge for educators trying to detect AI misuse.
This escalates a detection-evasion arms race: detectors get better, humanisers evolve.
The phenomenon underlines the urgency of redesigning assessment and emphasising process, not just output.
The academic world is currently experiencing a bifurcated response to artificial intelligence: while some faculty are enthusiastically innovating with AI to transform learning, others are deliberately avoiding its integration, advocating for traditional methods. This image vividly illustrates these contrasting approaches within university classrooms, highlighting the ongoing debate and diverse strategies faculty are employing regarding AI. Image (and typos) generated by Nano Banana.
Source
Cornell Chronicle
Summary
Cornell faculty are experimenting with hybrid approaches to AI: some integrate generative AI into coursework, others push back by returning to in-person, pencil-and-paper assessments. In nutrition and disease classes, AI is used to simulate patient case studies, generating unpredictable errors that prompt students to think critically. In parallel, some professors now include short “job interview” chats or oral questions to verify understanding. A campus survey found 70% of students use GenAI weekly or more, but only 44% of faculty do. Cornell is responding via workshops, a GenAI education working group, and guidelines to preserve academic integrity while embracing AI’s pedagogical potential.
Key Points
AI is used to generate case studies, simulate patients, debate AI arguments, and help faculty draft content.
Some faculty moved back to paper exams, in-class assessments, or short oral checks (“job interviews”) to safeguard learning.
A campus survey showed 70% of students use GenAI weekly, vs. 44% of faculty.
Cornell’s GenAI working group develops policies, workshops, and academic integrity guidelines around AI use.
The approach is not binary acceptance or rejection, but navigating where AI can support without eroding students’ reasoning and agency.
As the stealth of AI-generated content in written assignments increases, educators are exploring alternative assessment methods. This image highlights a return to oral examinations, where direct interaction can provide a more accurate measure of a student’s understanding and original thought, bypassing the challenges of AI detection software. Image (and typos) generated by Nano Banana.
Source
The Conversation
Summary
Because AI-written texts can be presented convincingly as a student’s own, detecting AI use in student work is becoming increasingly difficult. The article argues that oral assessments (discussions, follow-up questioning, viva voce) expose a student’s reasoning in ways AI can’t mimic. Voice, hesitation, probing questions and depth of thought are far harder to fake in real time. The authors suggest reintroducing or strengthening oral exams and conversational assessments as a countermeasure to maintain academic integrity and ensure authentic student understanding.
Key Points
AI tools produce polished text, but they fail when asked to defend their reasoning under questioning.
Oral tests can force students to show understanding, not just output.
Real-time dialogue gives instructors more confidence about authenticity than text alone.
Reintroduction of oral assessment may help bridge the integrity gap in AI-era classrooms.
The method isn’t perfect, but it is a practical and historically grounded safeguard.
by Jim O’Mahony, SFHEA – Munster Technological University
Estimated reading time: 5 minutes
The true test of a professor’s intelligence: finding the lost remote control. Image generated by Nano Banana
I remember as a 7-year-old having to hop off the couch at home to change the TV channel. Shortly afterwards, a futuristic-looking device called a remote control was placed in my hand, and since then I have chosen its wizardry over my own physical ability to operate the TV. Why wouldn’t I? It’s reliable, instant, multifunctional, compliant and most importantly… less effort.
The Seduction of Less Effort
Less effort… as humans, we’re biologically wired for it. Our bodies will always choose the energy-saving option over the energy-consuming one, whether the task is physical or cognitive. It’s an evolutionary aid to conserve energy.
Now, my life hasn’t been impaired by the introduction of a remote control, but imagine for a minute if that remote control had replaced my thinking as a 7-year-old rather than my ability to operate a TV. It sounds fanciful, but in reality, this is exactly the world in which our students are now living.
Within their grasp is a seductive all-knowing technological advancement called Gen AI, with the ability to replace thinking, reflection, metacognition, creativity, evaluative judgement, interpersonal relationships and other richly valued attributes that make us uniquely human.
Now, don’t get me wrong, I’m a staunch flag bearer for this new age of Gen AI and can see the unlimited potential it holds for enhanced learning. Who knows? Someday, it may even solve Bloom’s 2 sigma problem through its promise of personalised learning.
Guardrails for a New Age
However, I also realise that, as the adults in the room, we have a very narrow window to put sufficient guardrails in place for our students around its use, and urgent, considered governance is needed from university executives.
Gen AI literacy isn’t the most glamorous term (it may not even be the most appropriate one), but it encapsulates what our priority as educators should be: learn what these tools are, how they work, what their limitations, problems, challenges and pitfalls are, and how we can use them positively within our professional practice to support rather than replace learning.
Isn’t that what we all strive for? To have the right digital tools matched with the best pedagogical practices so that our students enter the workforce as well-rounded, fully prepared graduates – a workforce, by the way, that is rapidly changing: more than 71% of employers were routinely adopting Gen AI 12 months ago (we can only imagine what the figure is now).
Shouldn’t our teaching practices change then, to reflect the new Gen AI-rich graduate attributes required by employers? Surely, the answer is YES… or is it? There is no easy answer – and perhaps no right answer. Maybe we’ve been presented with a wicked problem – an unsolvable situation where some crusade to resist AI, and others introduce policies to ‘ban the banning’ of AI! Confused anyone?
Rethinking Assessment in a GenAI World
I believe a common-sense approach is best and would have us reimagine our educational programmes with valid, secure and authentic assessments that reward learning both with and without the use of Gen AI.
Achieving this is far from easy, but as a starting point, consider a recent paper from Deakin University, which advocates for structural changes to assessment design along with clearly communicated instructions to students around Gen AI use.
To facilitate a more discursive approach regarding reimagined assessment protocols, some universities are adopting ‘traffic light systems’ such as the AI Assessment scale, which, although not perfect (or the whole solution), at least promotes open and transparent dialogue with students about assessment integrity – and that’s never a bad thing.
The challenge will come from those academics who resist the adoption of Gen AI in education. Whether their reasons relate to privacy, environmental issues, ethics, inherent bias, AGI, autonomous AI or cognitive offloading concerns (all well-intentioned and entirely valid by the way), Higher Ed debates and decision making around this topic in the coming months will be robust and energetic.
Accommodating the fearful or ‘traditionalist educators’ who feel unprepared or unwilling to road-test Gen AI should be a key part of any educational strategy or initiative. Their voices should be heard and their opinions considered – but in return, they also need to understand how Gen AI works.
From Resistance to Fluency
Within each department, faculty, staffroom, T&L department – even among the rows of your students, you will find early adopters and digital champions who are a little further along this dimly lit path to Gen AI enlightenment. Seek them out, have coffee with them, reflect on their wisdom and commit to trialling at least one new Gen AI tool or application each week – here’s a list of 100 to get you started. Slowly build your confidence, take an open course, learn about AI fluency, and benefit from the expertise of others.
I’m not encouraging you to become an AI evangelist, but improving your knowledge and general AI capabilities will leave you better placed to make informed decisions for yourself and your students.
Now, did anyone see where I left the remote control?
Jim O’Mahony
University Professor | Biotechnologist | Teaching & Learning Specialist Munster Technological University
I am a passionate and enthusiastic University lecturer with over 20 years’ experience of designing, delivering and assessing undergraduate and postgraduate programmes. My primary focus as an academic is to empower students to achieve their full potential through innovative educational strategies and carefully designed curricula. I embrace the strategic and well-intentioned use of digital tools as part of my learning ethos, and I have been an early adopter and enthusiastic advocate of Artificial Intelligence (AI) as an educational tool.