Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is flatly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the student cannot satisfactorily explain the reasoning behind it. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed (exclusively) into ChatGPT and complete faith in the responses it generates. The deficit is twofold: a lack of foundational knowledge of the topic, and a complete absence of critical engagement with the AI’s output.

Remedying this begins not with a single ‘aha!’ moment but with a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and engaging critically with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their task is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim be verified against a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered that the emperor had no clothes. They found that AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, such as using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of the module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused the technology. Their use of AI began to shift: instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.




AI as the Next Literacy


Just as reading and writing have long been fundamental literacies, proficiency in Artificial Intelligence is rapidly emerging as the next essential skill. This image envisions a future where understanding AI, its principles, and its applications becomes a cornerstone of education, preparing individuals to navigate and thrive in an increasingly technologically advanced world. Image (and typos) generated by Nano Banana.

Source

Psychology Today

Summary

The article argues that as AI becomes pervasive, society is developing a new kind of literacy: not just how to read and write, but how to prompt, evaluate, and iterate with AI systems. AI extends our reach like a tool or a “racket” in sport, but it cannot replace foundational skills such as perception, language, and meaning-making. The author warns that skipping fundamentals (critical thinking, writing, reasoning) risks hollowing out our capacities. In practice, education should blend traditional learning (drafting essays, debugging code) with AI-assisted revision and engagement, treating AI as augmentation, not replacement.

Key Points

  • AI literacy involves encoding intent through prompt design, interpreting output, and iterating.
  • Just as reading and writing layered onto speaking and listening, AI literacy layers onto existing cognitive skills.
  • Overreliance on AI without grounding in fundamentals weakens human capabilities.
  • Classrooms might require initial manual drafts or debugging before AI enhancement.
  • The challenge: integrate AI into scaffolding so that it amplifies rather than replaces thinking.


URL

https://www.psychologytoday.com/us/blog/the-emergence-of-skill/202510/ai-as-the-next-literacy

Summary generated by ChatGPT 5


How task design transforms AI interactions in the classroom


The way educators design tasks is becoming a critical factor in shaping effective AI interactions within the classroom. This image illustrates a dynamic learning environment where thoughtful task design guides students in leveraging AI for enhanced learning outcomes, moving beyond traditional methods to truly transform educational engagement. Image (and typos) generated by Nano Banana.

Source

Psychology Today

Summary

The article argues that the way educators frame and structure tasks determines whether AI becomes a thinking crutch or a scaffold for deeper learning. A classroom debate scenario showed how teams assigned different roles (AI user, content evaluator, information gatherer) could distribute cognitive load and enhance engagement. Prompts asking the AI to “explain your reasoning” nudged students to interrogate its output. But without scaffolding, some teams admitted to overreliance and to skipping higher-order thinking. Well-designed tasks that promote interaction, reflection, and collaborative interpretation help AI remain a support, not a substitute.

Key Points

  • Role assignment (AI user, evaluator, gatherer) helps distribute cognitive responsibility.
  • Prompt framing (e.g. “explain your reasoning”) can push AI away from surface responses.
  • Debate structure (real-time questioning) adds social accountability and forces adaptation.
  • Without support, some students fall into dependency, skipping critical thought.
  • The design of tasks (interaction, reflection, scaffolding) is central to ensuring AI enhances rather than replaces human thinking.


URL

https://www.psychologytoday.com/ie/blog/in-one-lifespan/202509/how-task-design-transforms-ai-interactions-in-the-classroom

Summary generated by ChatGPT 5