Frequent AI chatbot use associated with lower grades among computer science students


A young male computer science student sits in a modern lab filled with other students working on computers, looking directly at the viewer with a concerned expression. Above him, a glowing red holographic display shows 'AI CHATBOT DEPENDENCE - LOWER GRADES', featuring a downward-trending graph and an 'F' grade on a document. The scene visually links over-reliance on AI chatbots with declining academic performance. Generated by Nano Banana.
New research indicates a concerning trend: frequent reliance on AI chatbots by computer science students is often associated with lower academic grades. This image visualises that finding, suggesting that while AI tools offer convenience, over-dependence may hinder the development of critical problem-solving skills essential for deep learning and success. Image (and typos) generated by Nano Banana.

Source

PsyPost

Summary

A study of 231 first-year computer science students in Estonia finds that more frequent use of AI chatbots correlates with lower performance on programming tests, final exams, and overall course scores. While nearly 80% of students had used an AI assistant at least once, heavier use was associated with lower grades. Interestingly, students’ perceptions of how helpful the tools were did not predict their academic outcomes. The data suggests a complex relationship: struggling students may rely more on chatbots, or overreliance might undermine their development of core coding skills.

Key Points

  • Nearly 80% of students in a programming course reported using AI chatbots at least once; usage patterns varied significantly.
  • Those who used chatbots more often earned lower scores on tests, exams, and overall course standings.
  • Most common uses: debugging code, getting explanations; less common: full code generation.
  • Students cited speed, availability, and clear explanations as benefits—but also reported hallucinations and overly advanced or irrelevant responses.
  • The study couldn’t disentangle causation: lower ability might drive more AI use, or AI use might hinder deeper learning.

Keywords

URL

https://www.psypost.org/frequent-ai-chatbot-use-associated-with-lower-grades-among-computer-science-students/

Summary generated by ChatGPT 5


In the Age of AI, Are Universities Doomed?


A group of academic figures and students are seated around a grand, traditional university library table, looking towards a glowing, holographic projection of a human brain with interconnected digital pathways, overlaid with various data points and "AI" labels. The brain appears against a backdrop of a city skyline at dusk. Generated by Nano Banana.
The rapid advancement of Artificial Intelligence prompts a critical question: What is the future of higher education? This image explores the intersection of classic academic settings and cutting-edge AI, contemplating whether universities are on the brink of obsolescence or transformation in this new technological era. Image (and typos) generated by Nano Banana.

Source

The Walrus

Summary

Robert Gibbs reflects on how universities must adapt in an era where AI and digital tools erode their traditional role as repositories of knowledge. With information universally accessible, the value of higher education lies less in storing facts and more in fostering judgement, interpretation, and critical inquiry. Drawing on his experience at the University of Toronto’s Jackman Humanities Institute, Gibbs argues that the humanities’ long tradition of commentary, reflection, and editing can guide universities in cultivating discernment and slow, thoughtful learning. In the face of rapid information flows and AI-driven content, universities must champion practices that value reflection, contextual reading, and intellectual judgement over efficiency.

Key Points

  • Universities can no longer justify themselves as mere repositories of information, since knowledge is now widely accessible.
  • The mission should shift to developing interpretation, critique, and judgement as central student skills.
  • Humanities traditions of commentary, redaction, and reflection offer models for navigating digital and AI contexts.
  • Libraries and collaborative digital humanities projects show how to combine old scholarly methods with new technology.
  • In an era of speed and distraction, universities should foster slower, deeper reading and writing to cultivate discernment.

Keywords

URL

https://thewalrus.ca/universities-in-the-age-of-ai/

Summary generated by ChatGPT 5


Sometimes We Resist AI for Good Reasons


In a classic, wood-paneled library, five serious-looking professionals (three female, two male) stand behind a long wooden table laden with books. A large, glowing red holographic screen hovers above the table, displaying 'AI: UNETHICAL BIAS - DATA SECURITY - LOSS THE CRITICAL THOUGHT' and icons representing ethical concerns. The scene conveys a thoughtful resistance to AI based on justified concerns. Generated by Nano Banana.
In an era where AI is rapidly integrating into all aspects of life, this image powerfully illustrates that ‘sometimes we resist AI for good reasons.’ It highlights critical concerns such as unethical biases, data security vulnerabilities, and the potential erosion of critical thought, underscoring the importance of cautious and principled engagement with artificial intelligence. Image (and typos) generated by Nano Banana.

Source

The Chronicle of Higher Education

Summary

Kevin Gannon argues that in crafting AI policies for universities, it’s vital to include voices critical of generative AI, not just technophiles. He warns that the rush to adopt AI (for grading, lesson planning, etc.) often ignores deeper concerns about academic values, workloads, and epistemic integrity. Institutions repeatedly issue policies that are outdated almost immediately, and students feel caught in the gap between policy and practice. Gannon’s call: resist the narrative of inevitability, listen to sceptics, and create policies rooted in local context, shared governance, and respect for institutional culture.

Key Points

  • Many universities struggle to keep AI policies updated in the face of rapid technological change.
  • Students often receive vague or conflicting guidance on when AI use is allowed.
  • The push for AI adoption is framed as inevitable, marginalising critics who raise valid concerns.
  • Local context matters deeply — uniform policies rarely do justice to varied departmental needs.
  • Including dissenting voices improves policy legitimacy and avoids blind spots.

Keywords

URL

https://www.chronicle.com/article/sometimes-we-resist-ai-for-good-reasons

Summary generated by ChatGPT 5


Generic AI cannot capture higher education’s unwritten rules


Five academics, dressed in business attire, are seated around a chess board on a wooden table in a traditional library, with books and papers. Above them, a large holographic screen displays 'AI - UNWRITTEN RULES: ACCESS DENIED' and 'CONTEXTUAL NUANCE: UNA'ILABLE', surrounded by data. Two thought bubbles above the central figure read 'HUMAN SHARED UNDERSTAIN' and 'SHARE 'ID UNDERSTANHINP'. The scene symbolizes AI's inability to grasp the subtle, unwritten rules of higher education. Generated by Nano Banana.
While AI excels at processing explicit data, it fundamentally struggles to grasp the nuanced, ‘unwritten rules’ that govern higher education. This image illustrates the critical gap where generic AI falls short in understanding the complex social, cultural, and contextual intricacies that define the true academic experience, highlighting the irreplaceable value of human intuition and shared understanding. Image (and typos) generated by Nano Banana.

Source

Wonkhe

Summary

Kurt Barling argues that universities operate not only through formal policies but also through tacit, institution-specific norms—corridor conversations, precedents, traditions—that generic AI cannot perceive or replicate. Deploying off-the-shelf AI tools risks flattening institutional uniqueness, eroding identity and agency. He suggests universities co-design AI tools that reflect their values, embed nuance, preserve institutional memory, and maintain human oversight. Efficiency must not come at the cost of hollowing out culture or of letting external systems dictate how universities function.

Key Points

  • Universities depend heavily on tacit norms and culture—unwritten rules that guide decisions and practices.
  • Generic AI, based on broad datasets, flattens nuance and treats institutions as interchangeable.
  • If universities outsource decision-making to black-box systems, they risk losing identity and governance control.
  • A distributed “human-assistive AI” approach is preferable: systems that suggest, preserve memory, and stay under human supervision.
  • AI adoption must not sacrifice culture and belonging for efficiency; sector collaboration is needed to build tools aligned with institutional values.

Keywords

URL

https://wonkhe.com/blogs/generic-ai-cannot-capture-higher-educations-unwritten-rules/

Summary generated by ChatGPT 5


How generative AI is really changing education – by outsourcing the production of knowledge to Big Tech


A classroom-like setting inside a large server room, where students in white lab coats sit at desks with holographic screens projected from their devices. Above them, a prominent blue neon logo glows for 'BIG TECH EDUSYNC'. The students, connected by wires, have blank expressions. The scene criticises the outsourcing of education to technology. Generated by Nano Banana.
The rise of generative AI is fundamentally reshaping education, leading to concerns that the critical process of ‘knowledge production’ is being outsourced to Big Tech companies. This image visualises a future where learning environments are dominated by AI, raising questions about autonomy, critical thinking, and the ultimate source of truth in an AI-driven academic landscape. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

This article argues that generative AI is shifting the locus of knowledge production from academic institutions into the hands of Big Tech platforms. As students and educators increasingly rely on AI tools, the power to define what counts as knowledge — what is true, what is cited, what is authoritative — is ceded to private firms. That shift risks marginalising critical thinking, local curricula, and disciplinary expertise. The author calls for reclaiming epistemic authority: universities must define their own ways of knowing, educate students not just in content but in evaluative judgement, and negotiate more equitable relationships with AI platforms so that academic integrity and autonomy aren’t compromised.

Key Points

  • Generative AI tools increasingly mediate how knowledge is accessed, curated and presented—Big Tech becomes a gatekeeper.
  • Reliance on AI may weaken disciplinary expertise and the role of scholars as knowledge producers.
  • Students may accept AI outputs uncritically, transferring trust to algorithmic systems over faculty.
  • To respond, higher education must build student literacy in epistemics (how we know what we know) and insist AI remain assistive, not authoritative.
  • Universities should set policy, technical frameworks, and partnerships that protect research norms, attribution, and diverse knowledge systems.

Keywords

URL

https://theconversation.com/how-generative-ai-is-really-changing-education-by-outsourcing-the-production-of-knowledge-to-big-tech-263160

Summary generated by ChatGPT 5