Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is flatly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the student cannot satisfactorily explain the reasoning behind it. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed (exclusively) into ChatGPT and complete faith in whatever response comes back. The dual deficit is clear: a lack of foundational knowledge on the topic, and a complete absence of critical engagement with the AI’s output.

Remedying this begins not with a single ‘aha!’ moment but with a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers: I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their job is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?
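
By way of illustration, a minimal sketch of such a prompt-and-interrogation cycle is given below. The field names, checklist questions, and example values are illustrative assumptions, not the actual contents of the module’s framework.

```python
# Hypothetical sketch of a structured prompt plus an "interrogation"
# checklist, in the spirit of the framework described above. All field
# names and questions are illustrative assumptions, not the module's
# actual teaching materials.

def build_prompt(role: str, task: str, context: str, constraints: list[str]) -> str:
    """Assemble a structured prompt rather than a one-line question."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints:\n{constraint_lines}"
    )

# Questions to ask of every AI response before trusting it.
INTERROGATION_CHECKLIST = [
    "Does the output follow every instruction in the prompt?",
    "Can each claim be traced to a credible, existing source?",
    "Are cited papers real, and are their findings represented accurately?",
    "Is there a leading tone or hidden bias that misrepresents the topic?",
    "Does the method or argument hold up against the module materials?",
]

if __name__ == "__main__":
    print(build_prompt(
        role="academic writing assistant",
        task="Draft a 500-word literature review on service innovation.",
        context="Second-year undergraduate business module.",
        constraints=["Cite only peer-reviewed sources", "Use Harvard referencing"],
    ))
    print("\nBefore accepting the response, ask:")
    for question in INTERROGATION_CHECKLIST:
        print(f"- {question}")
```

The split mirrors the two halves of the framework: construct a better input first, then apply systematic doubt to the output.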

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. He presently holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.


Academic Libraries Embrace AI


The future of learning: Academic libraries are evolving into hubs where traditional knowledge meets cutting-edge AI, enhancing research and access to information. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

A global Clarivate survey of more than 2,000 librarians across 109 countries shows that artificial intelligence adoption in libraries is accelerating, particularly within academic institutions. Sixty-seven percent of libraries are exploring or implementing AI, up from 63 percent in 2024, with academic libraries leading the trend. Their priorities include supporting student learning and improving content discovery. Libraries that provide AI training, resources, and leadership encouragement report the highest success and optimism. However, adoption and attitudes vary sharply by region—U.S. librarians remain the least optimistic—and by seniority, with senior leaders expressing greater confidence and favouring administrative applications.

Key Points

  • 67% of libraries are exploring or using AI, up from 63% in 2024.
  • Academic libraries lead in adoption, focusing on student engagement and learning.
  • AI training and institutional support drive successful implementation.
  • Regional differences persist; U.S. librarians remain the least optimistic, with only 7% expressing optimism.
  • Senior librarians show higher confidence and prefer AI for administrative efficiency.

URL

https://www.insidehighered.com/news/quick-takes/2025/10/31/academic-libraries-embrace-ai

Summary generated by ChatGPT 5


Where Does Human Thinking End and AI Begin? An AI Authorship Protocol Aims to Show the Difference


Decoding authorship: A visual representation of the intricate boundary between human creativity and AI generation, highlighting the need for protocols to delineate their contributions. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

Eli Alshanetsky, a philosophy professor at Temple University, warns that as AI-generated writing grows increasingly polished, the link between human reasoning and authorship is at risk of dissolving. To preserve academic and professional integrity, his team is piloting an “AI authorship protocol” that verifies human engagement during the creative process without resorting to surveillance or detection. The system embeds real-time reflective prompts and produces a secure “authorship tag” confirming that work aligns with specified AI-use rules. Alshanetsky argues this approach could serve as a model for ensuring accountability and trust across education, publishing, and professional fields increasingly shaped by AI.
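
The article describes the protocol only at a high level, so the following is a purely illustrative sketch, not Alshanetsky’s actual design. It assumes the “authorship tag” is a keyed hash, issued by the institution, over a record of the declared AI-use terms and the student’s responses to the reflective prompts.

```python
# Illustrative sketch only: the article does not specify how the
# "authorship tag" is constructed. Here we assume it is an HMAC,
# issued by a key-holding institution, over a record of the declared
# AI-use terms and the student's reflective-prompt responses.
import hashlib
import hmac
import json

def issue_authorship_tag(institution_key: bytes, session_record: dict) -> str:
    """Mint a tag that only the key holder (the institution) can produce."""
    # Serialize deterministically so the same record always yields the same tag.
    payload = json.dumps(session_record, sort_keys=True).encode("utf-8")
    return hmac.new(institution_key, payload, hashlib.sha256).hexdigest()

def verify_authorship_tag(institution_key: bytes, session_record: dict, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = issue_authorship_tag(institution_key, session_record)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = b"demo-key-held-by-the-institution"   # hypothetical key
    record = {
        "submission_id": "essay-042",            # hypothetical identifiers
        "ai_use_terms": "brainstorming only; no generated prose",
        "reflective_responses": ["...", "..."],  # captured during writing
    }
    tag = issue_authorship_tag(key, record)
    print(tag)
    print(verify_authorship_tag(key, record, tag))  # True
```

The point of the sketch is the accountability model: the tag certifies that a declared process was followed, rather than judging after the fact whether the text ‘looks human’, which matches the article’s emphasis on verification without surveillance or detection.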

Key Points

  • Advanced AI threatens transparency around human thought in writing and decision-making.
  • A new authorship protocol links student output to authentic reasoning.
  • The system uses adaptive AI prompts and verification tags to confirm engagement.
  • It avoids intrusive monitoring by building AI-use terms into the submission process.
  • The model could strengthen trust in professions dependent on human judgment.

URL

https://theconversation.com/where-does-human-thinking-end-and-ai-begin-an-ai-authorship-protocol-aims-to-show-the-difference-266132

Summary generated by ChatGPT 5


This Professor Let Half His Class Use AI. Here’s What Happened


An academic experiment unfolds: Visualizing the stark differences in engagement and performance between students who used AI and those who did not, as observed by one professor. Image (and typos) generated by Nano Banana.

Source

Gizmodo

Summary

A study by University of Massachusetts Amherst professor Christian Rojas compared two sections of the same advanced economics course—one permitted structured AI use, the other did not. The results revealed that allowing AI under clear guidelines improved student engagement, confidence, and reflective learning but did not affect exam performance. Students with AI access reported greater efficiency and satisfaction with course design while developing stronger habits of self-correction and critical evaluation of AI outputs. Rojas concludes that carefully scaffolded AI integration can enrich learning experiences without fostering dependency or academic shortcuts, though larger studies are needed.

Key Points

  • Structured AI use increased engagement and confidence but not exam scores.
  • Students used AI for longer, more focused sessions and reflective learning.
  • Positive perceptions grew regarding efficiency and instructor quality.
  • AI integration encouraged editing, critical thinking, and ownership of ideas.
  • Researchers stress that broader trials are required to validate results.

URL

https://gizmodo.com/this-professor-let-half-his-class-use-ai-heres-what-happened-2000678960

Summary generated by ChatGPT 5


AI: Are we empowering students – or outsourcing the skills we aim to cultivate?


The rise of AI in education presents a crucial dichotomy: are we using it to truly empower students and cultivate essential skills, or are we inadvertently outsourcing those very abilities to algorithms? This image visually explores the two potential paths for AI’s integration into learning, urging a thoughtful approach to its implementation. Image (and typos) generated by Nano Banana.

Source

The Irish Times

Summary

Jean Noonan reflects on the dual role of artificial intelligence in higher education—its capacity to empower learning and its risk of eroding fundamental human skills. As AI becomes embedded in teaching, research, and assessment, universities must balance innovation with integrity. AI literacy, she argues, extends beyond technical skills to include ethics, empathy, and critical reasoning. While AI enhances accessibility and personalised learning, over-reliance may weaken originality and authorship. Noonan calls for assessment redesigns that integrate AI responsibly, enabling students to learn with AI rather than be replaced by it. Collaboration between academia, industry, and policymakers is essential to ensure education cultivates judgment, creativity, and moral awareness. Echoing Orwell’s warning in 1984, she concludes that AI should enhance, not diminish, the intellectual and linguistic richness that defines human learning.

Key Points

  • AI literacy must combine technical understanding with ethics, empathy, and reflection.
  • Universities are rapidly adopting AI but risk outsourcing creativity and independent thought.
  • Over-reliance on AI tools can blur authorship and weaken critical engagement.
  • Assessment design should promote ethical AI use and active, independent learning.
  • Collaboration between universities and industry can align innovation with responsible practice.
  • Education must ensure AI empowers rather than replaces essential human skills.

URL

https://www.irishtimes.com/ireland/education/2025/10/29/ai-are-we-empowering-students-or-outsourcing-the-skills-we-aim-to-cultivate/

Summary generated by ChatGPT 5