Students’ complicated relationship with AI: ‘It’s inherently going against what college is’


Navigating the academic world with new AI tools presents students with a complex dilemma. The image illustrates the tension between the efficiency AI offers and the pursuit of deep understanding at the heart of a college education, capturing the internal debate students face as technology challenges traditional learning. Image generated by, and typos courtesy of, Nano Banana.

Source

The Irish Times

Summary

Many students describe a tension between using generative AI (GenAI) tools like ChatGPT and the traditional values of university education. Some avoid AI because they feel it undermines academic integrity or devalues the effort they have invested; others see benefits in using it to organise study, generate ideas, or offload mundane parts of coursework. Concerns include fairness (better grades for less effort), the accuracy of chatbot-generated content, and environmental impact. Students also worry about the loss of critical thinking and the changing nature of assignments as AI becomes more common. There are calls for clearer institutional guidelines, greater awareness of existing policies, and equitable access and use.

Key Points

  • Using GenAI can feel like “offloading work,” conflicting with the ideal of self-directed learning that many students believe defines college life.
  • Students worry about fairness: those who use AI may gain advantage over those who do not.
  • Accuracy is a concern: ChatGPT sometimes provides false information; students are aware of this risk.
  • Some students steer clear of AI to avoid suspicion of cheating; some fear being accused even when they have not used it.
  • Others find helpful uses: organising references, creating study timetables, acting as a “second pair of eyes” or “study companion.”

URL

https://www.irishtimes.com/life-style/people/2025/09/20/students-complicated-relationship-with-ai-chatbots-its-inherently-going-against-what-college-is/

Summary generated by ChatGPT 5


AI Detectors in Education


Source

Associate Professor Mark A. Bassett

Summary

This report critically examines the use of AI text detectors in higher education, questioning their accuracy, fairness, and ethical implications. While institutions often adopt detectors as a visible response to concerns about generative AI in student work, the paper highlights that their statistical metrics (e.g., false positive/negative rates) are largely meaningless in real-world educational contexts. Human- and AI-written text cannot be reliably distinguished, making detector outputs unreliable as evidence. Moreover, reliance on detectors risks reinforcing inequities: students with access to premium AI tools are less likely to be flagged, while others face disproportionate scrutiny.
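
To see why headline metrics can mislead in practice, consider the base-rate problem. The sketch below applies Bayes’ rule with illustrative numbers (the rates and prevalence are assumptions chosen for demonstration, not figures from Bassett’s report): even a detector with apparently strong true/false positive rates mostly flags innocent work when only a minority of submissions are actually AI-written.

```python
# Illustrative sketch of the base-rate problem for AI-text detectors.
# The TPR, FPR, and prevalence values below are assumed for demonstration
# only; they are not figures from Bassett's report.

def positive_predictive_value(tpr: float, fpr: float, prevalence: float) -> float:
    """P(text is AI-written | detector flags it), via Bayes' rule."""
    true_positives = tpr * prevalence            # AI-written work correctly flagged
    false_positives = fpr * (1.0 - prevalence)   # human work wrongly flagged
    return true_positives / (true_positives + false_positives)

# A detector advertising 95% TPR and 5% FPR, used where 10% of work is AI-written:
ppv = positive_predictive_value(tpr=0.95, fpr=0.05, prevalence=0.10)
print(f"P(AI-written | flagged) = {ppv:.0%}")  # ~68%: about 1 in 3 flags is wrong
```

Under these assumed numbers, roughly a third of flagged students would be falsely accused, and the figure worsens as real-world prevalence drops or as students with stronger tools evade detection, which is precisely the report’s point about controlled-test metrics failing to transfer to practice.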

Bassett argues that AI detectors compromise fairness and transparency in academic integrity processes. Comparisons to metal detectors, smoke alarms, or door locks are dismissed as misleading, since those tools measure objective, physical phenomena with regulated standards, unlike the probabilistic guesswork of AI detectors. The report stresses that detector outputs shift the burden of proof unfairly onto students, often pressuring them into confessions or penalising them based on arbitrary markers like writing style or speed. Instead of doubling down on flawed tools, the focus should be on redesigning assessments, clarifying expectations, and upholding procedural fairness.

Key Points

  • AI detectors appear effective but offer no reliable standard of evidence.
  • Accuracy metrics such as true positive rate (TPR) and false positive rate (FPR) are meaningless in practice outside controlled tests.
  • Detectors unfairly target students without addressing systemic integrity issues.
  • Reliance risks inequity: affluent or tech-savvy students can evade detection more easily.
  • Using multiple detectors or comparing student work to AI outputs compounds bias rather than producing evidence.
  • Analogies to locks, smoke alarms, or metal detectors are misleading and invalid.
  • Procedural fairness demands that institutions—not students—carry the burden of proof.
  • False positives have serious consequences for students, unlike benign fire alarm errors.
  • Deterrence through fear undermines trust and shifts education toward surveillance.
  • Real solutions lie in redesigning assessment practices, not deploying flawed detection tools.

Conclusion

AI detectors are unreliable, unregulated, and ethically problematic as tools for ensuring academic integrity. Rather than treating detector outputs as evidence, institutions should prioritise fairness, transparency, and assessment redesign. Ensuring that students learn and are evaluated equitably requires moving beyond technological quick fixes toward principled, values-based approaches.

URL

https://drmarkbassett.com/assets/AI_Detectors_in_education.pdf

Summary generated by ChatGPT 5