Sometimes We Resist AI for Good Reasons


In a classic, wood-paneled library, five serious-looking professionals (three female, two male) stand behind a long wooden table laden with books. A large, glowing red holographic screen hovers above the table, displaying 'AI: UNETHICAL BIAS - DATA SECURITY - LOSS THE CRITICAL THOUGHT' and icons representing ethical concerns. The scene conveys a thoughtful resistance to AI based on justified concerns.
In an era where AI is rapidly integrating into all aspects of life, this image powerfully illustrates that ‘sometimes we resist AI for good reasons.’ It highlights critical concerns such as unethical biases, data security vulnerabilities, and the potential erosion of critical thought, underscoring the importance of cautious and principled engagement with artificial intelligence. Image (and typos) generated by Nano Banana.

Source

The Chronicle of Higher Education

Summary

Kevin Gannon argues that in crafting AI policies for universities, it’s vital to include voices critical of generative AI, not just technophiles. He warns that the rush to adopt AI (for grading, lesson planning, etc.) often ignores deeper concerns about academic values, workloads, and epistemic integrity. Institutions repeatedly issue policies that are outdated almost immediately, and students feel caught in the gap between policy and practice. Gannon’s call: resist the narrative of inevitability, listen to sceptics, and create policies rooted in local context, shared governance, and respect for institutional culture.

Key Points

  • Many universities struggle to keep AI policies up to date in the face of fast-moving technical change.
  • Students often receive vague or conflicting guidance on when AI use is allowed.
  • The push for AI adoption is framed as inevitable, marginalising critics who raise valid concerns.
  • Local context matters deeply — uniform policies rarely do justice to varied departmental needs.
  • Including dissenting voices improves policy legitimacy and avoids blind spots.

URL

https://www.chronicle.com/article/sometimes-we-resist-ai-for-good-reasons

Summary generated by ChatGPT 5


In the Age of AI, Are Universities Doomed?


A group of academic figures and students are seated around a grand, traditional university library table, looking towards a glowing, holographic projection of a human brain with interconnected digital pathways, overlaid with various data points and "AI" labels. The brain appears against a backdrop of a city skyline at dusk.
The rapid advancement of Artificial Intelligence prompts a critical question: What is the future of higher education? This image explores the intersection of classic academic settings and cutting-edge AI, contemplating whether universities are on the brink of obsolescence or transformation in this new technological era. Image (and typos) generated by Nano Banana.

Source

The Walrus

Summary

Robert Gibbs reflects on how universities must adapt in an era where AI and digital tools erode their traditional role as repositories of knowledge. With information universally accessible, the value of higher education lies less in storing facts and more in fostering judgement, interpretation, and critical inquiry. Drawing on experiences at the University of Toronto’s Jackman Humanities Institute, Gibbs argues the humanities’ long tradition of commentary, reflection, and editing can guide universities in cultivating discernment and slow, thoughtful learning. In the face of rapid information flows and AI-driven content, universities must champion practices that value reflection, contextual reading, and intellectual judgement over efficiency.

Key Points

  • Universities can no longer justify themselves as mere repositories of information, since knowledge is now widely accessible.
  • The mission should shift to developing interpretation, critique, and judgement as central student skills.
  • Humanities traditions of commentary, redaction, and reflection offer models for navigating digital and AI contexts.
  • Libraries and collaborative digital humanities projects show how to combine old scholarly methods with new technology.
  • In an era of speed and distraction, universities should foster slower, deeper reading and writing to cultivate discernment.

URL

https://thewalrus.ca/universities-in-the-age-of-ai/

Summary generated by ChatGPT 5


Frequent AI chatbot use associated with lower grades among computer science students


A young male computer science student sits in a modern lab filled with other students working on computers, looking directly at the viewer with a concerned expression. Above him, a glowing red holographic display shows 'AI CHATBOT DEPENDENCE - LOWER GRADES', featuring a downward-trending graph and an 'F' grade on a document. The scene visually links over-reliance on AI chatbots with declining academic performance.
New research indicates a concerning trend: frequent reliance on AI chatbots by computer science students is often associated with lower academic grades. This image visualises that finding, suggesting that while AI tools offer convenience, over-dependence may hinder the development of the critical problem-solving skills essential for deep learning and success. Image (and typos) generated by Nano Banana.

Source

PsyPost

Summary

A study of 231 first-year computer science students in Estonia finds that more frequent use of AI chatbots correlates with lower performance on programming tests, final exams, and overall course scores. While nearly 80% of students had used an AI assistant at least once, heavier use was associated with lower grades. Interestingly, students’ perceptions of how helpful the tools were did not predict their academic outcomes. The data suggest a complex relationship: struggling students may rely more on chatbots, or heavy reliance may undermine the development of core coding skills.

Key Points

  • Nearly 80% of students in a programming course reported using AI chatbots; usage patterns varied significantly.
  • Those who used chatbots more often earned lower scores on tests, exams, and overall course standings.
  • Most common uses: debugging code, getting explanations; less common: full code generation.
  • Students cited speed, availability, and clear explanations as benefits — but also reported hallucinations and overly advanced or irrelevant responses.
  • The study couldn’t disentangle causation: lower ability might drive more AI use, or AI use might hinder deeper learning.

URL

https://www.psypost.org/frequent-ai-chatbot-use-associated-with-lower-grades-among-computer-science-students/

Summary generated by ChatGPT 5


Enacting Assessment Reform in a Time of Artificial Intelligence


Source

Tertiary Education Quality and Standards Agency (TEQSA), Australian Government

Summary

This resource addresses how Australian higher education can reform assessment in response to the rise of generative AI. Building on earlier work (Assessment Reform for the Age of Artificial Intelligence), it sets out strategies that align with the Higher Education Standards Framework while acknowledging that gen AI is now ubiquitous in student learning and professional practice. The central message is that detection alone is insufficient; instead, assessment must be redesigned to assure learning authentically, ethically, and sustainably.

The report outlines three main pathways: (1) program-wide assessment reform, which integrates assessment as a coherent system across degrees; (2) unit/subject-level assurance of learning, where each subject includes at least one secure assessment task; and (3) a hybrid approach combining both. Each pathway carries distinct advantages and challenges, from institutional resourcing and staff coordination to maintaining program coherence and addressing integrity risks. Critical across all approaches is the need to balance immediate integrity concerns with long-term goals of preparing students for an AI-integrated future.

Key Points

  • Generative AI necessitates structural assessment reform, not reliance on detection.
  • Assessments must equip students to participate ethically and critically in an AI-enabled society.
  • Assurance of learning requires multiple, inclusive, and contextualised approaches.
  • Program-level reform provides coherence and alignment but demands significant institutional commitment.
  • Unit-level assurance offers quick implementation but risks fragmentation.
  • Hybrid approaches balance flexibility with systemic assurance.
  • Over-reliance on traditional supervised exams risks reducing authenticity and equity.
  • Critical questions must guide reform: alignment across units, disciplinary variation, and student experience.
  • Assessment must reflect authentic professional practices where gen AI is legitimately used.
  • Ongoing collaboration and evidence-sharing across the sector are vital for sustainable reform.

Conclusion

The report concludes that assessment reform in the age of AI is not optional but essential. Institutions must move beyond short-term fixes and design assessment systems that assure learning, uphold integrity, and prepare students for future professional contexts. This requires thoughtful strategy, collaboration, and a willingness to reimagine assessment as a developmental, systemic, and values-driven practice.

URL

https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/enacting-assessment-reform-time-artificial-intelligence

Summary generated by ChatGPT 5