Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
A stylized image featuring a character resembling Doctor Strange, dressed in his iconic attire, standing in a magical classroom setting. He holds up a glowing scroll labeled "SYLLABUS." In the foreground, two students (one Hispanic, one Black) are seated at a table, working on laptops that display a red 'X' over an AI-like interface, symbolizing mistrust of AI. Above Doctor Strange, a glowing, menacing AI entity with red eyes and outstretched arms hovers, presenting a digital screen, representing the seductive but potentially harmful nature of AI. Magical, glowing runes, symbols, and light effects fill the air around the students and the central figure, illustrating complex learning. Image (and typos) generated by Nano Banana.
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is flatly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the student cannot satisfactorily explain the reasoning behind it. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool, but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching has become to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class: the product of poorly constructed prompts fed exclusively into ChatGPT, and complete faith in the responses it generates. It is clear there is a dual deficit: a lack of foundational knowledge of the topic, and a complete absence of critical engagement with the AI's output.

Remedying this begins not with a single ‘aha!’ moment, but through a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their brief is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles: Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.


Keywords


Why Even Basic A.I. Use Is So Bad for Students


A distressed student sits at a desk with their head in their hands, surrounded by laptops displaying AI interfaces. Labeled "INTELLECTUAL STAGNATION." Image (and typos) generated by Nano Banana.
The weight of intellectual stagnation: How reliance on AI can hinder genuine learning and critical thinking in students. Image (and typos) generated by Nano Banana.

Source

The New York Times

Summary

Anastasia Berg, a philosophy professor at the University of California, Irvine, contends that even minimal reliance on AI tools threatens students’ cognitive development and linguistic competence. Drawing on her experience of widespread AI use in a moral philosophy course, Berg argues that generative AI erodes the foundational processes of reading, reasoning, and self-expression that underpin higher learning and democratic citizenship. While past technologies reshaped cognition, she claims AI uniquely undermines the human capacity for thought itself by outsourcing linguistic effort. Berg calls for renewed emphasis on tech-free learning environments to protect students’ intellectual autonomy and critical literacy.

Key Points

  • Over half of Berg’s students used AI to complete philosophy exams.
  • AI shortcuts inhibit linguistic and conceptual growth central to thinking.
  • Even “harmless” uses, like summarising, weaken cognitive engagement.
  • Cognitive decline could threaten democratic participation and self-rule.
  • Universities should create tech-free spaces to rebuild reading and writing skills.

Keywords

URL

https://www.nytimes.com/2025/10/29/opinion/ai-students-thinking-school-reading.html

Summary generated by ChatGPT 5


English Professors Take Individual Approaches to Deterring AI Use


A triptych showing three different English professors employing distinct methods to deter AI use. The first panel shows a professor lecturing on critical thinking. The second shows a professor providing personalized feedback on a digital screen. The third shows a professor leading a discussion with creative prompts. Image (and typos) generated by Nano Banana.
Diverse strategies in action: English professors are developing unique and personalised methods to encourage original thought and deter the misuse of AI in their classrooms. Image (and typos) generated by Nano Banana.

Source

Yale Daily News

Summary

Without a unified departmental policy, Yale University’s English professors are independently addressing the challenge of generative AI in student writing. While all interviewed faculty agree that AI undermines critical thinking and originality, their responses vary from outright bans to guided experimentation. Professors Stefanie Markovits and David Bromwich warn that AI shortcuts obstruct the process of learning to think and write independently, while Rasheed Tazudeen enforces a no-tech classroom to preserve student engagement. Playwriting professor Deborah Margolin insists that AI cannot replicate authentic human voice and creativity. Across approaches, faculty emphasise trust, creativity, and the irreplaceable role of struggle in developing genuine thought.

Key Points

  • Yale English Department lacks a central AI policy, favouring academic freedom.
  • Faculty agree AI use hinders original thinking and creative voice.
  • Some, like Tazudeen, impose no-tech classrooms to deter reliance on AI.
  • Others allow limited exploration under clear guidelines and reflection.
  • Consensus: authentic learning requires human engagement and intellectual struggle.

Keywords

URL

https://yaledailynews.com/blog/2025/10/29/english-professors-take-individual-approaches-to-deterring-ai-use/

Summary generated by ChatGPT 5


Homework Is Facing an Existential Crisis: Has AI Made It Pointless?


A split image contrasting traditional homework with AI-influenced study. The left side shows a frustrated teenage boy sitting at a cluttered desk under a lamp, struggling with a textbook and papers, with a large red 'X' overlaid on him, signifying the traditional struggle. The background features a messy bulletin board and bookshelves. The right side shows the same boy relaxed in a modern, blue-lit setting, calmly using a tablet. Above his tablet, a friendly, glowing holographic AI tutor figure appears, surrounded by flowing data, equations, and digital interfaces, representing effortless, AI-assisted learning. Image (and typos) generated by Nano Banana.
As AI revolutionizes learning, traditional homework faces an existential crisis. This image dramatically contrasts the classic struggle with assignments against the ease of AI-assisted learning, raising a fundamental question: has artificial intelligence rendered conventional homework pointless, or simply redefined its purpose? Image (and typos) generated by Nano Banana.

Source

Los Angeles Times

Summary

Howard Blume explores how the rise of artificial intelligence is forcing educators to reconsider the value of homework. According to the College Board, 84 per cent of U.S. high school students now use AI for schoolwork, leading some teachers to abandon homework entirely while others redesign tasks to make AI misuse harder. Educators such as Alyssa Bolden in Inglewood now require handwritten essays to limit AI reliance, while others emphasise in-class mastery over at-home repetition. Experts warn that poorly designed homework, amplified by AI, risks undermining learning and widening inequality. Yet research suggests students still benefit from meaningful, creative assignments that foster independence, time management, and deeper understanding. The article concludes that AI hasn’t made homework obsolete—it has exposed the need for better, more purposeful learning design.

Key Points

  • 84 per cent of U.S. high school students use AI for schoolwork, up from 79 per cent earlier in 2025.
  • Teachers are divided: some have scrapped homework, while others are redesigning it to resist AI shortcuts.
  • AI challenges traditional measures of academic effort and authenticity.
  • Experts urge teachers to create engaging, meaningful assignments that deepen understanding.
  • Poorly designed homework can increase stress and widen learning gaps, particularly across socioeconomic lines.
  • The consensus: students don’t need more homework—they need better homework.

Keywords

URL

https://www.latimes.com/california/story/2025-10-25/homework-useless-existential-crisis-ai

Summary generated by ChatGPT 5


Why Students Shouldn’t Use AI, Even Though It’s OK for Teachers


A split image showing a frustrated male student on the left, with text "AI USE FOR STUDENTS: PROHIBITED," and a smiling female teacher on the right, with text "AI USE FOR TEACHERS: ACCEPTED." Both are working on laptops in a contrasting light. Image (and typos) generated by Nano Banana.
The double standard: Exploring why AI use might be acceptable for educators yet detrimental for students’ learning and development. Image (and typos) generated by Nano Banana.

Source

Edutopia

Summary

History and journalism teacher David Cutler argues that while generative AI can meaningfully enhance teachers’ feedback and efficiency, students should not use it unsupervised. Teachers possess the critical judgment to evaluate AI outputs, but students risk bypassing essential cognitive processes and genuine understanding. Cutler likens premature AI use to handing a calculator to someone who hasn’t learned basic arithmetic. He instead promotes structured, transparent use—AI for non-assessed learning or teacher moderation—while continuing to teach critical thinking and writing through in-class work. His stance reflects both ethical caution and pragmatic optimism about AI’s potential to support, not supplant, human learning.

Key Points

  • Teachers can use AI to improve feedback, fairness, and grading efficiency.
  • Students lack the maturity and foundational skills for unsupervised AI use.
  • In-class writing fosters integrity, ownership, and authentic reasoning.
  • Transparent teacher use models responsible AI practice.
  • Slow, deliberate adoption best protects student learning and trust.

Keywords

URL

https://www.edutopia.org/article/why-students-should-not-use-ai/

Summary generated by ChatGPT 5