Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
A stylized image featuring a character resembling Doctor Strange, dressed in his iconic attire, standing in a magical classroom setting. He holds up a glowing scroll labeled "SYLLABUS." In the foreground, two students (one Hispanic, one Black) are seated at a table, working on laptops that display a red 'X' over an AI-like interface, symbolizing mistrust of AI. Above Doctor Strange, a glowing, menacing AI entity with red eyes and outstretched arms hovers, presenting a digital screen, representing the seductive but potentially harmful nature of AI. Magical, glowing runes, symbols, and light effects fill the air around the students and the central figure, illustrating complex learning. Image (and typos) generated by Nano Banana.
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is flatly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the reasoning behind the answer cannot be satisfactorily explained. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed into (exclusively) ChatGPT and complete faith in whatever response it generates. The dual deficit is clear: a lack of foundational knowledge of the topic, and an absence of critical engagement with the AI's output.

Remedying this begins not with a single ‘aha!’ moment, but through a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their task is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.




New elephants in the Generative AI room? Acknowledging the costs of GenAI to develop ‘critical AI literacy’

by Sue Beckingham, NTF PFHEA – Sheffield Hallam University and Peter Hartley NTF – Edge Hill University

Image created using DALL-E 2 (2024) – reused to save cost

The GenAI industry regularly proclaims that the ‘next release’ of the chatbot of your choice will get closer to its ultimate goal – Artificial General Intelligence (AGI) – where AI can complete the widest range of tasks better than the best humans.

Are we providing sufficient help and support to our colleagues and students to understand and confront the implications of this direction of travel?

Or is AGI either an improbable dream or the ultimate threat to humanity?

Along with many (most?) GenAI users, we have seen impressive developments but not yet seen apps demonstrating anything close to AGI. OpenAI released GPT-5 in 2025 and Sam Altman (CEO) enthused: “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” But critical reaction to this new model was very mixed and he had to backtrack, admitting that the launch was “totally screwed up”. Hopefully, this provides a bit of breathing space for Higher Education – an opportunity to review how we encourage staff and students to adopt an appropriately critical and analytic perspective on GenAI – what we would call ‘critical AI literacy’.

Acknowledging the costs of Generative AI

Critical AI literacy involves understanding how to use GenAI responsibly and ethically – knowing when and when not to use it, and the reasons why. One elephant in the room is that GenAI incurs costs, and we need to acknowledge these.

Staff and students should be aware of ongoing debates on GenAI’s environmental impact, especially given increasing pressures to develop GenAI as your ‘always-on/24-7’ personal assistant. Incentives to treat GenAI as a ‘free’ service have increased with OpenAI’s move into education, offering free courses and certification. We also see increasing pressure to integrate GenAI into pre-university education, as illustrated by the recent ‘Back to School’ AI Summit 2025 and accompanying book, which promises a future of ‘creativity unleashed’.

We advocate a multi-factor definition of the ‘costs’ of GenAI so we can debate its capabilities and limitations from the broadest possible perspective. For example, we must evaluate the opportunity costs to users. Recent research, including brain scans of individual users, found that over-use of GenAI (or specific patterns of use) can have a definite negative impact on users’ cognitive capacities and performance, including metacognitive laziness and cognitive debt. We group costs into four key areas: cost to the individual, to the environment, to knowledge, and to future jobs.

Cost of Generative AI to the individual, environment, knowledge and future jobs
(Beckingham and Hartley, 2025)

Cost to the individual

Fees – Subscription fees for GenAI tools range from free for the basic version through to different levels of paid upgrades (note: subscription tiers are continually changing). Premium models such as enterprise AI assistants are costly, limiting access to businesses or high-income users.

Accountability – Universities must provide clear guidelines on what can and cannot be shared with these tools, along with the concerns and implications of infringing copyright.

Over-reliance – Outcomes for learning depend on how GenAI apps are used. If students rely on AI-generated content too heavily or exclusively, they can make poor decisions, with a detrimental effect on skills.

Safety and mental health – Increased use of personal assistants providing ‘personal advice’ for socioemotional purposes can lead to increased social isolation.

Cost to the environment

Energy consumption – Training and deploying Large Language Models (LLMs) requires millions of GPU hours, and the demand increases substantially for image generation. The growth of data centres also creates concerns for energy supply.

Emissions and carbon footprint – Developing the technology creates emissions through the mining, manufacturing, transport and recycling processes.

Water consumption – Water needed for cooling in the data centres equates to millions of gallons per day.

e-Waste – This includes toxic materials (e.g. lead, barium, arsenic and chromium) in components within the ever-increasing number of LLM servers. Obsolete servers generate substantial toxic emissions if not recycled properly.

Cost to knowledge

Erosion of expertise – Models are trained on information publicly available on the internet, on data from formal partnerships with third parties, and on information that users, human trainers and researchers provide or generate.

Ethics – Ethical concerns highlight the lived experiences of those employed in data annotation and content moderation of text, images and video to remove toxic content.

Misinformation – Indiscriminate data scraping from blogs, social media, and news sites, coupled with text entered by users of LLMs, can result in ‘regurgitation’ of personal data, hallucinations and deepfakes.

Bias – Algorithmic bias and discrimination occur when LLMs inherit social patterns, perpetuating stereotypes relating to gender, race, disability and other protected characteristics.

Cost to future jobs

Job displacement – GenAI is “reshaping industries and tasks across all sectors”, driving business transformation. But will these technologies replace rather than augment human work?

Job matching – Increased use of AI in recruitment and by jobseekers creates the risk that GenAI misrepresents skills. This makes it harder for job-seeker profile analysers to accurately identify candidates who can genuinely evidence the skills they claim.

New skills – Reskilling and upskilling in AI and big data top the list of fastest-growing workplace skills. A lack of opportunity to do so can lead to increased unemployment and inequality.

Wage suppression – Workers with skills that enable them to use AI may see their productivity and wages increase, whereas those who do not may see their wages decrease.

The way forward

We can only develop AI literacy by actively involving our student users. Previously we have argued that institutions/faculties should establish ‘collaborative sandpits’ offering opportunities for discussion and ‘co-creation’. Staff and students need space for this so that they can contribute to debates on what we really mean by ‘responsible use of GenAI’ and develop procedures to ensure responsible use. This is one area where collaborations/networks like GenAI N3 can make a significant contribution.

Sadly, we see too many commentaries which downplay, neglect or ignore GenAI’s issues and limitations. For example, the latest release from OpenAI – Sora 2 – offers text-to-video generation and has raised some important challenges to copyright regulations. There is also the continuing problem of hallucinations. Despite recent claims of improved accuracy, GenAI is still susceptible to them. But how do we identify and guard against untruths which are confidently expressed by the chatbot?

We all need to develop a realistic perspective on GenAI’s likely development. The pace of technical change (and some rather secretive corporate habits) makes this very challenging for individuals, so we need proactive and co-ordinated approaches by course/programme teams. The practical implication of this discussion is that we all need to develop a much broader understanding of GenAI than a simple ‘press this button’ approach.

Reference

Beckingham, S. and Hartley, P. (2025). In search of ‘Responsible’ Generative AI (GenAI). In: Doolan, M.A. and Ritchie, L. (eds.) Transforming teaching excellence: Future proofing education for all. Leading Global Excellence in Pedagogy, Volume 3. UK: IFNTF Publishing. ISBN 978-1-7393772-2-9 (ebook). https://amzn.eu/d/gs6OV8X

Sue Beckingham

Associate Professor Learning and Teaching
Sheffield Hallam University

Sue Beckingham is an Associate Professor in Learning and Teaching at Sheffield Hallam University. Externally she is a Visiting Professor at Arden University and a Visiting Fellow at Edge Hill University. She is also a National Teaching Fellow, Principal Fellow of the Higher Education Academy and Senior Fellow of the Staff and Educational Developers Association. Her research interests include the use of technology to enhance active learning, and she has published and presented this work internationally as an invited keynote speaker. Recent book publications include Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment.

Peter Hartley

Visiting Professor
Edge Hill University

Peter Hartley is now a Higher Education Consultant and Visiting Professor at Edge Hill University, following previous roles as Professor of Education Development at the University of Bradford and Professor of Communication at Sheffield Hallam University. A National Teaching Fellow since 2000, he has promoted new technology in education, now focusing on the applications/implications of Generative AI, co-editing and contributing to the SEDA/Routledge publication Using Generative AI Effectively in Higher Education (2024; paperback edition 2025). He has also produced several guides and textbooks for students (e.g. as co-author of Success in Groupwork, 2nd edn). Ongoing work includes programme assessment strategies, concept mapping and visual thinking.




Schools in Wales ‘excited but wary’ as teacher workloads cut


A split image contrasting two emotional responses to AI in Welsh schools. On the left, a group of smiling, happy teachers stands around a table with a glowing holographic display showing "TEACHER WORKLOAD REDUCTION" and icons representing administrative tasks, symbolizing excitement. On the right, a group of wary, concerned teachers huddle around a laptop displaying "AI IN CLASSROOMS: BENEFITS & RISKS," with text highlighting "JOB SECURITY?" and "DATA PRIVACY," reflecting their apprehension. The Welsh flag is visible in the background on the left. Image (and typos) generated by Nano Banana.
As artificial intelligence begins to reduce teacher workloads in schools across Wales, educators are experiencing a mix of excitement for the potential benefits and apprehension about the unseen challenges. This image vividly contrasts the initial relief of reduced administrative burdens with the underlying worries about job security, data privacy, and the broader impact of AI on the educational landscape. Image (and typos) generated by Nano Banana.

Source

BBC News

Summary

A new report by Estyn, Wales’s education watchdog, finds that while artificial intelligence is helping teachers save time and reduce administrative workloads, schools remain cautious about its classroom use. Many Welsh teachers now use AI for lesson planning, report writing and tailoring resources for students with additional needs. However, concerns persist around plagiarism, over-reliance, and data ethics. At Birchgrove Comprehensive School in Swansea, staff are teaching pupils to use AI responsibly, balancing innovation with digital literacy. Estyn and the Welsh government both emphasise the need for national guidance and training to ensure AI enhances learning without undermining skills or safety.

Key Points

  • AI is reducing teacher workloads by automating planning and reporting tasks.
  • Estyn warns that schools need clearer guidance for ethical and safe AI use.
  • Pupils are using AI for revision and learning support, often with teacher oversight.
  • Staff report excitement about AI’s potential but remain wary of bias and misuse.
  • The Welsh government has committed to training and national policy development.


URL

https://www.bbc.com/news/articles/c0lkdxpz0dyo

Summary generated by ChatGPT 5


From Detection to Development: How Universities Are Ethically Embedding AI for Learning


In a large, modern university hall bustling with students and professionals, a prominent holographic display presents a clear transition. The left panel, "DETECTION ERA," shows crossed-out symbols for AI detection, indicating a past focus. The right panel, "AI FOR LEARNING & ETHICS," features a glowing brain icon within a shield, representing an "AI INTEGRITY FRAMEWORK" and various applications like personalized learning and collaborative spaces, illustrating a shift towards ethical AI development. Image (and typos) generated by Nano Banana.
Universities are evolving their approach to artificial intelligence, moving beyond simply detecting AI-generated content to actively and ethically embedding AI as a tool for enhanced learning and development. This image visually outlines this critical shift, showcasing how institutions are now focusing on integrating AI within a robust ethical framework to foster personalised learning, collaborative environments, and innovative educational practices. Image (and typos) generated by Nano Banana.

Source

HEPI

Summary

Rather than focusing on detection and policing, this blog argues, universities should shift toward ethically embedding AI as a pedagogical tool. Research commissioned by Studiosity found that when AI is used responsibly, it correlates with improved outcomes and retention, especially for non-traditional students. The blog presents a “conduit” metaphor, likening AI to an overhead projector: helpful, but not a replacement for core learning. A panel at the Universities UK Annual Conference proposed values and guardrails (integrity, equity, transparency, adaptability) to guide institutional policy. The piece calls for sandboxing new tools and for centring student support and human judgment in AI adoption.

Key Points

  • The narrative needs to move from detection and restriction to development and support of AI in learning.
  • Independent research found a positive link between guided AI use and student attainment/retention, especially for non-traditional learners.
  • AI should be framed as a conduit (like projectors) rather than a replacement of teaching/learning.
  • A values-based framework is needed: academic integrity, equity, transparency, responsibility, resilience, empowerment, adaptability.
  • Universities should use “sandboxing” (controlled testing) and robust governance rather than blanket bans.


URL

https://www.hepi.ac.uk/2025/10/03/from-detection-to-development-how-universities-are-ethically-embedding-ai-for-learning/

Summary generated by ChatGPT 5


How AI is reshaping education – from teachers to students


A split image depicting the impact of AI on education. On the left, a female teacher stands in front of a holographic 'AI POWERED INSTRUCTION' diagram, addressing a group of students. On the right, students are engaged with 'AI LEARNING PARTNER' interfaces, one wearing a VR headset. A central glowing orb with 'EDUCATION TRANSFORMED: AI' connects both sides, symbolizing the pervasive change AI brings to both teaching and learning. Generated by Nano Banana.
From empowering educators with intelligent instruction tools to providing students with personalised AI learning partners, artificial intelligence is fundamentally reshaping every facet of education. This image illustrates the transformative journey, highlighting how AI is creating new dynamics in classrooms and preparing both teachers and learners for a future redefined by technology. Image (and typos) generated by Nano Banana.

Source

TribLIVE

Summary

In this article, educators in a Pennsylvania school district discuss how AI is being woven into teaching practice and student learning, not by replacing teachers but by amplifying their capacity. AI tools like Magic School help teachers personalise lesson plans, adjust reading levels, reduce repetitive tasks, and monitor student use. A “traffic light” system is used to label assignments by the level of AI use allowed. New teachers are required to learn AI tools, and students begin learning to use AI ethically from the early grades. The district emphasises that AI should not replace human work but should free teachers to focus more on interpersonal interaction and higher-order thinking.

Key Points

  • Magic School is used to adapt assignments by subject, grade, and reading level, giving teachers flexibility.
  • Teachers are being trained and supported in AI adoption via workshops, pilot programs, and guided experiments.
  • A colour-coded “traffic light” system distinguishes when AI is allowed (green), allowed for some parts (yellow), or disallowed (red).
  • Starting in early grades, students are taught what AI is and how to use it ethically; higher grades incorporate more active use.
  • The goal: reduce workload on teachers for repetitive tasks so they can devote more energy to student interaction and complex thinking.


URL

https://triblive.com/local/regional/heres-how-ai-is-reshaping-education-from-teachers-to-students/

Summary generated by ChatGPT 5