AI Is Robbing Students of Critical Thinking, Professor Says


In a grand, traditional university library, a menacing, cloaked digital entity with glowing red eyes representing AI looms over a group of students seated at a long table, all intensely focused on laptops with glowing blue faces. Thought bubbles emanate from the AI, offering to "GENERATE ESSAY," "SUMMARIZE," and "GIVE ANSWER." In the background, a visibly frustrated professor gestures emphatically, observing the scene. Image (and typos) generated by Nano Banana.
A prominent professor warns that the widespread use of AI is actively depriving students of opportunities to develop critical thinking skills. This image dramatically visualizes AI as a looming, pervasive force in the academic lives of students, offering quick solutions that may bypass the deeper cognitive processes essential for genuine intellectual growth and independent thought.

Source

Business Insider

Summary

Kimberley Hardcastle, assistant professor of business and marketing at Northumbria University, warns that generative AI is not just facilitating plagiarism—it’s encouraging students to outsource their thinking. Drawing on Anthropic data, she notes that about 39% of student-AI interactions involved creating or polishing academic texts, and another 33% requested direct solutions. Hardcastle argues this shifts the locus of intellectual authority toward Big Tech, making it harder for students to engage with ambiguity, weigh evidence, or claim ownership of ideas. She urges institutions to focus less on policing misuse and more on pedagogies that preserve critical thinking and epistemic agency.

Key Points

  • 39.3% of student-AI chats were about composing or revising assignments; 33.5% requested direct solutions.
  • AI output is often accepted uncritically because it is presented in polished, authoritative language.
  • The danger: students come to trust AI explanations over their own reasoned judgement.
  • Hardcastle views this as part of a larger shift: tech companies increasingly influence how “knowledge” is framed and delivered.
  • She suggests the response should emphasise pedagogy: design modes of teaching that foreground critical thinking over output policing.

Keywords

URL

https://www.businessinsider.com/ai-chatgpt-robbing-students-of-critical-thinking-professor-says-2025-9

Summary generated by ChatGPT 5


Students Who Lack Academic Confidence More Likely to Use AI


In a modern university library setting, a young female student with a concerned expression is intently focused on her laptop. A glowing holographic interface floats above her keyboard, displaying "ESSAY ASSIST," "RESEARCH BOT," and "CONFIDENCE BOOST!" with an encouraging smiley face. In the background, other students are also working on laptops. Image (and typos) generated by Nano Banana.
Research suggests a correlation between a lack of academic confidence in students and an increased likelihood of turning to AI tools for assistance. This image depicts a student utilising an AI interface offering “confidence boost” and “essay assist,” illustrating how AI can become a crutch for those feeling insecure about their abilities in the academic environment.

Source

Inside Higher Ed

Summary

A survey by Inside Higher Ed and Generation Lab finds that 85% of students say they have used generative AI for coursework in the past year. Students with lower self-perceived academic competence or confidence are more likely to lean on AI tools, especially when unsure or reluctant to ask peers or instructors for help. The study distinguishes between instrumental help-seeking (clarification, explanations) and executive help-seeking (using AI to complete work). Students who trust AI more are also more likely to use it. The authors argue that universities need clearer AI policies and stronger support structures so that students don’t feel forced into overreliance.

Key Points

  • 85% of surveyed students reported using generative AI for coursework in the past year.
  • Students with lower academic confidence or discomfort asking peers tend to rely more on AI.
  • AI use splits into two modes: instrumental (asking questions, clarifying) vs executive (using the AI to generate or complete work).
  • Trust in AI correlates with higher usage, even controlling for other variables.
  • Many students call for clear, standardised institutional policies on AI use to reduce ambiguity.
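The instrumental/executive distinction above can be made concrete with a toy classifier. This is a minimal sketch for illustration only: the keyword lists and function name are my own assumptions, not the instrument the Inside Higher Ed study used.

```python
# Toy illustration of the instrumental vs. executive help-seeking split.
# The cue lists below are illustrative assumptions, not the study's coding scheme.

INSTRUMENTAL_CUES = {"explain", "clarify", "why", "how does", "example"}
EXECUTIVE_CUES = {"write", "solve", "answer", "complete", "generate"}


def classify_help_seeking(prompt: str) -> str:
    """Label a student prompt as 'instrumental' (seeking understanding),
    'executive' (seeking a finished product), or 'unclear'."""
    text = prompt.lower()
    instrumental = any(cue in text for cue in INSTRUMENTAL_CUES)
    executive = any(cue in text for cue in EXECUTIVE_CUES)
    if instrumental and not executive:
        return "instrumental"
    if executive and not instrumental:
        return "executive"
    return "unclear"
```

A real coding scheme would of course rely on human raters or a trained model rather than keyword matching; the sketch only shows how the two modes differ in intent.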

Keywords

URL

https://www.insidehighered.com/news/student-success/academic-life/2025/09/30/students-who-lack-academic-confidence-more-likely-use

Summary generated by ChatGPT 5


OpenAI Releases List of Work Tasks ChatGPT Can Already Replace


In a sleek, modern open-plan office, a group of professionals stands around a glowing holographic display that projects "OpenAI: ChatGPT's Replaceable Work Tasks." A list of tasks like "Drafting Emails," "Writing Basic Reports," and "Data Entry & Cleaning" is visible, with checkmarks or X's next to them, indicating tasks ChatGPT can handle. Some individuals are holding tablets, observing the display, while others are in the background. Image (and typos) generated by Nano Banana.
OpenAI has released a significant list detailing numerous work tasks that its advanced AI, ChatGPT, is already capable of performing or even replacing. This image illustrates professionals observing these capabilities, highlighting the transformative impact AI is having on the modern workforce and prompting discussions about job roles and efficiency.

Source

Futurism

Summary

OpenAI published a new evaluation, GDPval, assessing how well its models perform “economically valuable” tasks across 44 occupations. The results suggest that current frontier models are approaching the quality of expert work in many domains. Examples include legal briefs, marketing analyses, technical documentation, medical image assessments, and sales brochures. While AI might not replace entire jobs, it can outperform humans in well-specified tasks. OpenAI emphasises that models currently handle repetitive, clearly defined tasks better than nuanced judgment work. GPT-5-High matched or surpassed expert deliverables in ~40% of evaluated cases. Critics warn of hallucinations, overconfidence, and the risk of overestimating AI’s real-world reach.

Key Points

  • GDPval tests 44 occupations on real-world tasks to benchmark AI against experts.
  • GPT-5-High achieved parity or better than expert work in ~40% of tasks.
  • Tasks include analytics, document drafting, medical imaging, and sales collateral.
  • AI models perform best on repetitive, narrow tasks and struggle with ambiguous, poorly defined ones.
  • OpenAI positions this not as job replacement but augmentation—yet raises deeper questions about labour, oversight, and trust.
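The ~40% figure above is a parity rate over graded tasks. As a rough sketch of how such a number could be computed from pairwise judgments (the labels, data, and function name here are hypothetical; GDPval's actual grading protocol is more involved):

```python
# Hypothetical sketch of a GDPval-style parity rate: the fraction of tasks
# where a model's deliverable was judged as good as or better than an expert's.

def parity_rate(judgments: list[str]) -> float:
    """Return the share of 'win' or 'tie' judgments among all graded tasks."""
    favorable = sum(1 for j in judgments if j in {"win", "tie"})
    return favorable / len(judgments)


# Invented grades over ten tasks: 4 of 10 favorable, i.e. the ~40% cited above.
grades = ["win", "tie", "loss", "loss", "loss",
          "win", "loss", "tie", "loss", "loss"]
```

Counting ties alongside wins is what "matched or surpassed" implies; a stricter win-only rate would be lower.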

Keywords

URL

https://futurism.com/future-society/openai-work-tasks-chatgpt-can-already-replace

Summary generated by ChatGPT 5


AI in the classroom is hard to detect — time to bring back oral tests


In a modern classroom or meeting room, students are seated around a table, some with laptops. Two individuals are engaged in an oral discussion, facing each other. Behind them, a large screen displays lines of code that appear to be pixelating and disappearing, symbolizing the difficulty in detecting AI. Image (and typos) generated by Nano Banana.
As the stealth of AI-generated content in written assignments increases, educators are exploring alternative assessment methods. This image highlights a return to oral examinations, where direct interaction can provide a more accurate measure of a student’s understanding and original thought, bypassing the challenges of AI detection software.

Source

The Conversation

Summary

Because AI-written text is easy to pass off convincingly as a student's own, detecting AI use in student work is becoming increasingly difficult. The article argues that oral assessments (discussions, structured questioning, viva voce) expose a student’s reasoning in ways AI can’t mimic. Voice, hesitation, follow-up questioning and depth of thought are far harder to fake in real time. The authors suggest reintroducing or strengthening oral exams and conversational assessments as a countermeasure to maintain academic integrity and ensure authentic student understanding.

Key Points

  • AI tools produce polished text, but they can’t help a student defend that text’s reasoning under live questioning.
  • Oral tests can force students to show understanding, not just output.
  • Real-time dialogue gives instructors more confidence about authenticity than text alone.
  • Reintroduction of oral assessment may help bridge the integrity gap in AI-era classrooms.
  • The method isn’t perfect, but it is a practical and historically grounded safeguard.

Keywords

URL

https://theconversation.com/ai-in-the-classroom-is-hard-to-detect-time-to-bring-back-oral-tests-265955

Summary generated by ChatGPT 5


How to test GenAI’s impact on learning


 In a futuristic classroom or lab, a large holographic screen prominently displays a glowing human brain at its center, surrounded by various metrics like "GENAI IMPACT ASSESSMENT," "CREATIVITY INDEX," and "CRITICAL THINKING SCORE." Several individuals, some wearing VR/AR headsets, are engaged with individual holographic desks showing similar data, actively analyzing GenAI's effects on learning. Image (and typos) generated by Nano Banana.
As generative AI becomes more prevalent, understanding its true impact on student learning is paramount. This image envisions a sophisticated approach to assessing GenAI’s influence, utilising advanced metrics and holographic displays to quantify and analyse its effects on creativity, critical thinking, and overall educational outcomes.

Source

Times Higher Education

Summary

Thibault Schrepel argues against speculation and for empirical classroom experiments to measure how generative AI truly affects student learning. He outlines simple, scalable experimental designs—e.g. one group barred from AI, one using it without guidance, one trained in prompting and critique—to compare outcomes in recall, writing quality, and reasoning. Schrepel also suggests activities such as having students build AI research assistants, comparing human and AI summaries, and using AI as a Socratic tutor. He emphasises that AI won’t uniformly help or hurt; its impact depends on how it’s used, taught, and assessed.

Key Points

  • Use controlled classroom experiments with different levels of AI access/training to reveal real effects.
  • Recall or rote learning may not change much; AI’s effects show more in reasoning, argumentation and writing quality.
  • Activities like comparing AI vs human summaries or having AI play the role of interlocutor can highlight strengths and limitations.
  • Prompting, critique, and metacognitive reflection are central to converting AI from crutch to tool.
  • Banning AI outright is less useful than enabling pedagogical experimentation and shared insight across faculty.
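The three-condition comparison described in the summary can be sketched in a few lines. This is a minimal illustration under my own assumptions (condition names, roster handling, and the scoring step are placeholders for a real rubric), not Schrepel's protocol.

```python
import random
import statistics

# Sketch of a randomised classroom comparison: split a roster into three
# AI conditions, then compare mean assessment scores per condition.
# Condition names are illustrative placeholders.
CONDITIONS = ["no_ai", "unguided_ai", "trained_ai"]


def assign_conditions(students: list, seed: int = 0) -> dict:
    """Randomly partition a student roster into the three conditions."""
    rng = random.Random(seed)
    shuffled = students[:]
    rng.shuffle(shuffled)
    return {cond: shuffled[i::3] for i, cond in enumerate(CONDITIONS)}


def condition_means(scores: dict) -> dict:
    """Mean assessment score per condition, given {condition: [scores]}."""
    return {cond: statistics.mean(vals) for cond, vals in scores.items()}
```

A real study would add a proper significance test and control for prior ability, but the core design—random assignment, then outcome comparison across AI-access levels—is exactly what the article advocates.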

Keywords

URL

https://www.timeshighereducation.com/campus/how-test-genais-impact-learning

Summary generated by ChatGPT 5