Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is simply wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the student cannot satisfactorily explain the reasoning behind it. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed into (exclusively) ChatGPT and complete faith in whatever response it generates. The dual deficit is clear: a lack of foundational knowledge of the topic, and a complete absence of critical engagement with the AI’s output.

Remedying this begins not with a single ‘aha!’ moment, but with a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their job is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. He presently holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.


AI Literacy Is Just Digital and Media Literacy in Disguise


This image visually argues that AI literacy is not an entirely new concept but rather an evolution or “disguise” of existing digital and media literacy skills. It highlights the interconnectedness of understanding digital tools, critically evaluating information, and navigating algorithmic influences, suggesting that foundational literacies provide a strong basis for comprehending and engaging with artificial intelligence effectively. Image (and typos) generated by Nano Banana.

Source

Psychology Today

Summary

Diana E. Graber argues that “AI literacy” is not a new concept but a continuation of long-standing digital and media literacy principles. Triggered by the April 2025 executive order Advancing Artificial Intelligence Education for American Youth, the sudden focus on AI education highlights skills schools should have been teaching all along—critical thinking, ethical awareness, and responsible participation online. Graber outlines seven core areas where digital and media literacy underpin AI understanding, including misinformation, digital citizenship, privacy, and visual literacy. She warns that without these foundations, students face growing risks such as deepfake abuse, data exploitation, and online manipulation.

Key Points

  • AI literacy builds directly on digital and media literacy foundations.
  • An executive order has made AI education a US national priority.
  • Core literacies—critical thinking, ethics, and responsibility—are vital for safe AI use.
  • Key topics include misinformation, cyberbullying, privacy, and online safety.
  • The article urges sustained digital education rather than reactionary AI hype.

URL

https://www.psychologytoday.com/us/blog/raising-humans-in-a-digital-world/202510/ai-literacy-is-just-digital-and-media-literacy-in

Summary generated by ChatGPT 5


What is AI slop, and is it the end of civilization as we know it?


The term “AI slop” refers to the deluge of low-quality, often nonsensical content rapidly generated by artificial intelligence, raising urgent questions about its impact on information integrity and human civilization itself. This dramatic image visually encapsulates the overwhelming and potentially destructive nature of AI slop, prompting a critical examination of whether this deluge of digital detritus marks a turning point for humanity. Image (and typos) generated by Nano Banana.

Source

RTÉ

Summary

The piece introduces AI slop — a term capturing the deluge of low-quality, mass-produced AI content flooding the web. Slop is described as formulaic, shallow, and often misleading—less about intelligence than volume. The article warns that this glut of content blurs meaningful discourse, degrades trust in credible sources, and threatens to overwhelm the attention economy. While it stops short of doomism, it argues that we must resist the normalisation of slop by emphasising critical reading, curation, and human judgment.

Key Points

  • AI slop refers to content generated by AI that is high in volume but low in substance (generic, shallow, noise).
  • This flood of slop threatens to drown out signals: quality writing, expert commentary, local voices.
  • The problem is systemic: the incentives of clicks, cheap content creation, and algorithmic amplification feed its growth.
  • To counteract slop, the article encourages media literacy, fact-checking, and more discerning consumption.
  • Over time, unchecked proliferation could erode trust in digital media and make distinguishing truth from AI noise harder.

URL

https://www.rte.ie/culture/2025/1005/1536663-what-is-ai-slop-and-is-it-the-end-of-civilization-as-we-know-it/

Summary generated by ChatGPT 5