
Source
Business Insider
Summary
Kimberley Hardcastle, assistant professor of business and marketing at Northumbria University, warns that generative AI is not just facilitating plagiarism; it is encouraging students to outsource their thinking. According to Anthropic data, about 39% of student-AI interactions involved creating or polishing academic texts, and another 33% requested direct solutions. Hardcastle argues this shifts the locus of intellectual authority toward Big Tech, making it harder for students to engage with ambiguity, weigh evidence, or claim ownership of ideas. She urges institutions to focus less on policing misuse and more on pedagogies that preserve critical thinking and epistemic agency.
Key Points
- 39.3% of student-AI chats involved composing or revising assignments; 33.5% requested direct solutions.
- AI output is often accepted uncritically because it is presented in polished, authoritative language.
- The danger: students come to trust AI explanations over their own reasoned judgement.
- Hardcastle views this as part of a larger shift: tech companies increasingly influence how “knowledge” is framed and delivered.
- She suggests the response should emphasise pedagogy: designing modes of teaching that foreground critical thinking rather than policing output.
Summary generated by ChatGPT 5