Latest Posts

AI in the classroom is hard to detect — time to bring back oral tests


In a modern classroom or meeting room, students are seated around a table, some with laptops. Two individuals are engaged in an oral discussion, facing each other. Behind them, a large screen displays lines of code that appear to be pixelating and disappearing, symbolizing the difficulty in detecting AI. Image (and typos) generated by Nano Banana.
As AI-generated content in written assignments becomes increasingly difficult to detect, educators are exploring alternative assessment methods. This image highlights a return to oral examinations, where direct interaction can provide a more accurate measure of a student’s understanding and original thought, bypassing the challenges of AI detection software.

Source

The Conversation

Summary

Because AI-written text can pass convincingly as a student’s own work, detecting AI use in written assignments is becoming increasingly difficult. The article argues that oral assessments (discussions, structured questioning, viva voce) expose a student’s reasoning in ways AI can’t mimic: voice, hesitation, responses to follow-up questions and depth of thought are far harder to fake in real time. The authors suggest reintroducing or strengthening oral exams and conversational assessments as a countermeasure to maintain academic integrity and ensure authentic student understanding.

Key Points

  • AI tools produce polished text, but they fail when asked to defend their reasoning under questioning.
  • Oral tests can force students to show understanding, not just output.
  • Real-time dialogue gives instructors more confidence about authenticity than text alone.
  • Reintroduction of oral assessment may help bridge the integrity gap in AI-era classrooms.
  • The method isn’t perfect, but it is a practical and historically grounded safeguard.

URL

https://theconversation.com/ai-in-the-classroom-is-hard-to-detect-time-to-bring-back-oral-tests-265955

Summary generated by ChatGPT 5


How to test GenAI’s impact on learning


In a futuristic classroom or lab, a large holographic screen prominently displays a glowing human brain at its center, surrounded by various metrics like "GENAI IMPACT ASSESSMENT," "CREATIVITY INDEX," and "CRITICAL THINKING SCORE." Several individuals, some wearing VR/AR headsets, are engaged with individual holographic desks showing similar data, actively analyzing GenAI's effects on learning. Image (and typos) generated by Nano Banana.
As generative AI becomes more prevalent, understanding its true impact on student learning is paramount. This image envisions a sophisticated approach to assessing GenAI’s influence, utilising advanced metrics and holographic displays to quantify and analyse its effects on creativity, critical thinking, and overall educational outcomes.

Source

Times Higher Education

Summary

Thibault Schrepel argues against speculation and for empirical classroom experiments to measure how generative AI truly affects student learning. He outlines simple, scalable experimental designs—e.g. groups forbidden from AI, groups using it without guidance, groups trained in prompting and critique—to compare outcomes in recall, writing quality, and reasoning. Schrepel also suggests activities like having students build AI research assistants, comparing human and AI summaries, and using AI as a Socratic tutor. He emphasises that AI won’t uniformly help or hurt; its impact depends on how it’s used, taught, and assessed.

Key Points

  • Use controlled classroom experiments with different levels of AI access/training to reveal real effects.
  • Recall or rote learning may not change much; AI’s effects show more in reasoning, argumentation and writing quality.
  • Activities like comparing AI vs human summaries or having AI play the role of interlocutor can highlight strengths and limitations.
  • Prompting, critique, and metacognitive reflection are central to converting AI from crutch to tool.
  • Banning AI outright is less useful than enabling pedagogical experimentation and shared insight across faculty.

URL

https://www.timeshighereducation.com/campus/how-test-genais-impact-learning

Summary generated by ChatGPT 5


2025 Horizon Action Plan: Building Skills and Literacy for Teaching with GenAI


Source

Jenay Robert, EDUCAUSE (2025)

Summary

This collection of essays explores how artificial intelligence—particularly generative AI (GenAI)—is reshaping the university sector across teaching, research, and administration. Contributors, including Dame Wendy Hall, Vinton Cerf, Rose Luckin, and others, argue that AI represents a profound structural shift rather than a passing technological wave. The report emphasises that universities must respond strategically, ethically, and holistically: developing AI literacy among staff and students, redesigning assessment, and embedding responsible innovation into governance and institutional strategy.

AI is portrayed as both a disruptive and creative force. It automates administrative processes, accelerates research, and transforms strategy-making, while simultaneously challenging ideas of authorship, assessment, and academic integrity. Luckin and others call for universities to foster uniquely human capacities—critical thinking, creativity, emotional intelligence, and metacognition—so that AI augments rather than replaces human intellect. Across the essays, there is strong consensus that AI literacy, ethical governance, and institutional agility are vital if universities are to remain credible and relevant in the AI era.

Key Points

  • GenAI is reshaping all aspects of higher education teaching and learning.
  • AI literacy must be built into curricula, staff training, and institutional culture.
  • Faculty should use GenAI to enhance creativity and connection, not replace teaching.
  • Clear, flexible policies are needed for responsible and ethical AI use.
  • Institutions must prioritise equity, inclusion, and closing digital divides.
  • Ongoing professional development in AI is essential for staff and administrators.
  • Collaboration across institutions and with industry accelerates responsible adoption.
  • Assessment and pedagogy must evolve to reflect AI’s role in learning.
  • GenAI governance should balance innovation with accountability and transparency.
  • Shared toolkits and global practice networks can scale learning and implementation.

Conclusion

The Action Plan positions GenAI as both a challenge and a catalyst for renewal in higher education. Institutions that foster literacy, ethics, and innovation will not only adapt but thrive. Teaching with AI is framed as a collective, values-led enterprise—one that keeps human connection, creativity, and critical thinking at the centre of the learning experience.

URL

https://library.educause.edu/resources/2025/9/2025-educause-horizon-action-plan-building-skills-and-literacy-for-teaching-with-genai

Summary generated by ChatGPT 5


Black Eyed Peas’ will.i.am to teach AI class at ASU


In a futuristic, dark room with glowing blue and red neon lights, a large holographic screen displays an online AI class titled "THE AGENTIC SELF." The main panel shows a charismatic male professor speaking, surrounded by various AI-related data, neural networks, and a stylized human head representing an AI. Below, a grid of diverse student participants is visible in a virtual meeting. The Arizona State University (ASU) logo is also displayed. Image (and typos) generated by Nano Banana.
This image envisions an engaging online AI class at Arizona State University, titled “The Agentic Self,” exploring the intricacies of autonomous AI. It showcases a dynamic virtual classroom where students connect from various locations, delving into cutting-edge concepts of AI’s self-governing capabilities and its implications for the future.

Source

Phoenix Business Journal

Summary

Arizona State University announced that Black Eyed Peas performer and entrepreneur will.i.am will join the faculty as a professor of practice to teach a course on artificial intelligence. Starting spring 2026, he will lead “The Agentic Self”, a 15-week class exploring how AI can serve as a creative and educational partner. The class will run through ASU’s GAME School and connect to will.i.am’s FYI.AI platform. University officials emphasise the collaboration as part of ASU’s mission to innovate teaching and help students gain fluency in emerging technologies.

Key Points

  • will.i.am joins ASU as professor of practice to teach AI.
  • Course title: “The Agentic Self”, scheduled for spring 2026.
  • Students will explore AI as tool, collaborator, and creative partner.
  • Class is hosted by ASU’s GAME School and linked to the FYI.AI platform.
  • Move underscores ASU’s strategy of blending tech, industry expertise, and higher education innovation.

URL

https://www.bizjournals.com/phoenix/news/2025/09/29/black-eyed-peas-performer-to-teach-asu-class-on-ai.html

Summary generated by ChatGPT 5


We must set the rules for AI use in scientific writing and peer review


A group of scientists and academics in lab coats are seated around a conference table in a modern meeting room with a city skyline visible through a large window. Above them, a glowing holographic screen displays "GOVERNING AI IN SCIENTIFIC PUBLICATION," with two main columns: "Scientific Writing" and "Peer Review," each listing specific regulations and ethical considerations for AI use, such as authorship, plagiarism checks, and bias detection. Image (and typos) generated by Nano Banana.
As AI’s role in academic research rapidly expands, establishing clear guidelines for its use in scientific writing and peer review has become an urgent imperative. This image depicts a panel of experts discussing these crucial regulations, emphasizing the need to set ethical frameworks to maintain integrity, transparency, and fairness in the scientific publication process.

Source

Times Higher Education

Summary

George Chalhoub argues that as AI becomes more entrenched in research and publication, the academic community urgently needs clear, enforceable guidelines for its use in scientific writing and peer review. He cites evidence of undeclared AI involvement in manuscripts and reviews, hidden prompts, and inflated submission volume. To maintain credibility, journals must require authors and reviewers to disclose AI use, forbid AI as a co-author, and ensure human oversight. Chalhoub frames AI as a tool—not a decision-maker—and insists that accountability, transparency, and common standards must guard against erosion of trust in the scientific record.

Key Points

  • Significant prevalence of AI content: e.g. 13.5% of 2024 abstracts bore signs of LLM use, with some fields reaching 40%.
  • Studies of review corpora suggest roughly 17% of peer-review sentences may already be AI-generated.
  • Some authors embed hidden prompts (e.g. white-text instructions) to influence AI-powered reviewing tools.
  • Core requirements: disclosure of AI use (tools, versions, roles), human responsibility for verification, no listing of AI as author.
  • Journals should adopt policies involving audits, sanctions for misuse, and shared frameworks via organisations like COPE and STM.

URL

https://www.timeshighereducation.com/opinion/we-must-set-rules-ai-use-scientific-writing-and-peer-review

Summary generated by ChatGPT 5