Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

by Jonathan Sansom – Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Estimated reading time: 5 minutes
Bridging the gap: This image illustrates how Microsoft Copilot can be leveraged in secondary education, moving from a “force analysis” of opportunities and challenges to the implementation of “pedagogical copilot agents” that assist both students and educators. Image (and typos) generated by Nano Banana.

At Hills Road, we’ve been living in the strange middle ground of generative AI adoption. If you charted its trajectory, it wouldn’t look like a neat curve or even the familiar ‘hype cycle’. It’s more like a tangled ball of wool: multiple forces pulling in competing directions.

The Forces at Play

Our recent work with Copilot Agents has made this more obvious. If we attempt a force analysis, the drivers for GenAI adoption are strong:

  • The need to equip students and staff with future-ready skills.
  • Policy and regulatory expectations, from DfE and Ofsted, to show assurance around AI integration.
  • National AI strategies that frame this as an essential area for investment.
  • The promise of personalised learning and workload reduction.
  • A pervasive cultural hype, blending existential narratives with a relentless ‘AI sales’ culture.

But there are also significant restraints:

  • Ongoing academic integrity concerns.
  • GDPR and data privacy ambiguity.
  • Patchy CPD and teacher digital confidence.
  • Digital equity and access challenges.
  • The energy cost of AI at scale.
  • Polarisation of educator opinion, and staff change fatigue.

The result is persistent dissonance. AI is neither fully embraced nor rejected; instead, we are all negotiating what it might mean in our own settings.

Educator-Led AI Design

One way we’ve tried to respond is through educator-led design. Our philosophy is simple: we shouldn’t just adopt GenAI; we must adapt it to fit our educational context.

That thinking first surfaced in experiments on Poe.com, where we created an Extended Project Qualification (EPQ) Virtual Mentor. It was popular, but it lived outside institutional control – not enterprise-grade and not GDPR-secure.

So in 2025 we have moved everything in-house. Using Microsoft Copilot Studio, we created 36 curriculum-specific agents, one for each A Level subject, deployed directly inside Teams. These agents are connected to our SharePoint course resources, ensuring students and staff interact with AI in a trusted, institutionally managed environment.

Built-in Pedagogical Skills

Rather than thinking of these agents as simply ‘question-answering machines’, we’ve tried to embed pedagogical skills that mirror what good teaching looks like. Each agent is structured around the skills below (sketched in code after the list):

  • Explaining through metaphor and analogy – helping students access complex ideas in simple, relatable ways.
  • Prompting reflection – asking students to think aloud, reconsider, or connect their ideas.
  • Stretching higher-order thinking – moving beyond recall into analysis, synthesis, and evaluation.
  • Encouraging subject language use – reinforcing terminology in context.
  • Providing scaffolded progression – introducing concepts step by step, only deepening complexity as students respond.
  • Supporting responsible AI use – modelling ethical engagement and critical AI literacy.
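
In Copilot Studio itself, these skills live as natural-language instructions in the agent’s authoring UI rather than as code. As a rough sketch of how such skills might be drafted and kept consistent across 36 subject agents – the skill wording and the helper below are ours and purely illustrative, not a Copilot Studio schema:

```python
# Illustrative sketch only: Copilot Studio takes these as natural-language
# instructions in its authoring UI. Drafting them as data makes it easier to
# keep 36 subject agents consistent while varying the subject-specific parts.

PEDAGOGICAL_SKILLS = [
    "Open new concepts with an everyday metaphor or analogy before any formal definition.",
    "After explaining, ask one short follow-up question inviting the student to restate or connect the idea.",
    "Once the basics are secure, pose analysis, synthesis or evaluation questions, not just recall.",
    "Reintroduce key disciplinary terminology in context, briefly glossing each term.",
    "Deepen complexity step by step, and only as the student's responses warrant it.",
    "Model responsible AI use: point to course resources and encourage students to verify claims.",
]

def draft_instructions(subject: str) -> str:
    """Assemble a draft instruction block for one A Level subject agent."""
    header = (
        f"You are the {subject} mentor for A Level students. "
        "Ground your answers in the course's SharePoint resources.\n"
    )
    return header + "\n".join(f"- {skill}" for skill in PEDAGOGICAL_SKILLS)

print(draft_instructions("Sociology"))
```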

These skills give the agents an educational texture. For example, if a sociology student asks: “What does patriarchy mean, but in normal terms?”, the agent won’t produce a dense definition. It will begin with a metaphor from everyday life, check understanding through a follow-up question, and then carefully layer in disciplinary concepts. The process is dialogic and recursive, echoing the scaffolding teachers already use in classrooms.

The Case for Copilot

We’re well aware that Microsoft Copilot Studio wasn’t designed as a pedagogical platform. It comes from the world of Power Automate, not the classroom. In many ways we’re “hijacking” it for our purposes. But it works.

The technical model is efficient: one Copilot Studio authoring licence, no full Copilot licences required, and all interactions handled through Teams chat. Data stays in tenancy, governed by our 365 permissions. It’s simple, secure, and scalable.

And crucially, it has allowed us to position AI as a learning partner, not a replacement for teaching. Our mantra remains: pedagogy first, technology second.

Lessons Learned So Far

From our pilots, a few lessons stand out:

  • Moving to an in-tenancy model was essential for trust.
  • Pedagogy must remain the driver – we want meaningful learning conversations, not shortcuts to answers.
  • Expectations must be realistic. Copilot Studio has clear limitations, especially in STEM contexts where dialogue is weaker.
  • AI integration is as much about culture, training, and mindset as it is about the underlying technology.

Looking Ahead

As we head into 2025–26, we’re expanding staff training, refining agent ‘skills’, and building metrics to assess impact. We know this is a long-haul project – five years at least – but it feels like the right direction.

The GenAI systems that students and teachers use in college were designed mainly by engineers, developers, and commercial actors. What’s missing is the educator’s voice. Our work is about inserting that voice: shaping AI not just as a tool for efficiency, but as an ally for reflection, questioning, and deeper thinking.

The challenge is to keep students out of what I’ve called the ‘Cognitive Valley’, that place where understanding is lost because thinking has been short-circuited. Good pedagogical AI can help us avoid that.

We’re not there yet. Some results are excellent, others uneven. But the work is underway, and the potential is undeniable. The task now is to make GenAI fit our context, not the other way around.

Jonathan Sansom

Director of Digital Strategy,
Hills Road Sixth Form College, Cambridge

Passionate about education, digital strategy in education, social and political perspectives on the purpose of learning, cultural change, wellbeing, group dynamics – and the mysteries of creativity…


Software / Services Used

Microsoft Copilot Studio – https://www.microsoft.com/en-us/microsoft-365-copilot/microsoft-copilot-studio

New elephants in the Generative AI room? Acknowledging the costs of GenAI to develop ‘critical AI literacy’

by Sue Beckingham, NTF PFHEA – Sheffield Hallam University and Peter Hartley NTF – Edge Hill University
Estimated reading time: 8 minutes
Image created using DALL-E 2 (2024) – reused to save cost

The GenAI industry regularly proclaims that the ‘next release’ of the chatbot of your choice will get closer to its ultimate goal – Artificial General Intelligence (AGI) – where AI can complete the widest range of tasks better than the best humans.

Are we providing sufficient help and support to our colleagues and students to understand and confront the implications of this direction of travel?

Or is AGI either an improbable dream or the ultimate threat to humanity?

Along with many (most?) GenAI users, we have seen impressive developments but not yet seen apps demonstrating anything close to AGI. OpenAI released GPT-5 in 2025 and Sam Altman (CEO) enthused: “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” But critical reaction to this new model was very mixed and he had to backtrack, admitting that the launch was “totally screwed up”. Hopefully, this provides a bit of breathing space for Higher Education – an opportunity to review how we encourage staff and students to adopt an appropriately critical and analytic perspective on GenAI – what we would call ‘critical AI literacy’.

Acknowledging the costs of Generative AI

Critical AI literacy involves understanding how to use GenAI responsibly and ethically – knowing when and when not to use it, and the reasons why. One elephant in the room is that GenAI incurs costs, and we need to acknowledge these.

Staff and students should be aware of ongoing debates on GenAI’s environmental impact, especially given increasing pressures to develop GenAI as your ‘always-on/24-7’ personal assistant. Incentives to treat GenAI as a ‘free’ service have increased with OpenAI’s move into education, offering free courses and certification. We also see increasing pressure to integrate GenAI into pre-university education, as illustrated by the recent ‘Back to School’ AI Summit 2025 and accompanying book, which promises a future of ‘creativity unleashed’.

We advocate a multi-factor definition of the ‘costs’ of GenAI so we can debate its capabilities and limitations from the broadest possible perspective. For example, we must evaluate opportunity costs to users. Recent research, including brain scans on individual users, found that over-use of GenAI (or specific patterns of use) can have a definite negative impact on users’ cognitive capacities and performance, including metacognitive laziness and cognitive debt. We group costs into four key areas: cost to the individual, to the environment, to knowledge, and to future jobs.

Cost of Generative AI to the individual, environment, knowledge and future jobs
(Beckingham and Hartley, 2025)

Cost to the individual

Fees: subscription fees for GenAI tools range from free for the basic version through to different levels of paid upgrades (Note: subscription tiers are continually changing). Premium models such as enterprise AI assistants are costly, limiting access to businesses or high-income users.

Accountability: Universities must provide clear guidelines on what can and cannot be shared with these tools, along with the concerns and implications of infringing copyright.

Over-reliance: Outcomes for learning depend on how GenAI apps are used. If students rely on AI-generated content too heavily or exclusively, they can make poor decisions, with a detrimental effect on skills.

Safety and mental health: Increased use of personal assistants providing ‘personal advice’ for socioemotional purposes can lead to increased social isolation.

Cost to the environment

Energy consumption – Training and deploying Large Language Models (LLMs) requires millions of GPU hours, and the demand increases substantially for image generation. The growth of data centres also creates concerns for energy supply.

Emissions and carbon footprint – Developing the technology creates emissions through the mining, manufacturing, transport and recycling processes.

Water consumption – The water needed for cooling data centres equates to millions of gallons per day.

e-Waste – This includes toxic materials (e.g. lead, barium, arsenic and chromium) in components within an ever-increasing number of LLM servers. Obsolete servers generate substantial toxic emissions if not recycled properly.

Cost to knowledge

Erosion of expertise – Models are trained on information publicly available on the internet, on data from formal partnerships with third parties, and on information that users, human trainers and researchers provide or generate.

Ethics – Ethical concerns highlight the lived experiences of those employed in data annotation and content moderation of text, images and video to remove toxic content.

Misinformation – Indiscriminate data scraping from blogs, social media and news sites, coupled with text entered by users of LLMs, can result in ‘regurgitation’ of personal data, hallucinations and deepfakes.

Bias – Algorithmic bias and discrimination occur when LLMs inherit social patterns, perpetuating stereotypes relating to gender, race, disability and other protected characteristics.

Cost to future jobs

Job displacement – GenAI is “reshaping industries and tasks across all sectors”, driving business transformation. But will these technologies replace rather than augment human work?

Job matching – Increased use of AI in recruitment and by jobseekers creates risks that GenAI misrepresents skills. This makes it harder for profile-analysis tools to match claimed skills with candidates who can genuinely evidence them.

New skills – Reskilling and upskilling in AI and big data top the list of fastest-growing workplace skills. A lack of opportunity to reskill can lead to increased unemployment and inequality.

Wage suppression – Workers with skills that enable them to use AI may see their productivity and wages increase, whereas those who do not may see their wages decrease.

The way forward

We can only develop AI literacy by actively involving our student users. Previously we have argued that institutions/faculties should establish ‘collaborative sandpits’ offering opportunities for discussion and ‘co-creation’. Staff and students need space for this so that they can contribute to debates on what we really mean by ‘responsible use of GenAI’ and develop procedures to ensure responsible use. This is one area where collaborations/networks like GenAI N3 can make a significant contribution.

Sadly, we see too many commentaries which downplay, neglect or ignore GenAI’s issues and limitations. For example, the latest release from OpenAI – Sora 2 – offers text-to-video and has raised some important challenges to copyright regulations. There is also the continuing problem of hallucinations. Despite recent claims of improved accuracy, GenAI is still susceptible to them. But how do we identify and guard against untruths which are confidently expressed by the chatbot?

We all need to develop a realistic perspective on GenAI’s likely development. The pace of technical change (and some rather secretive corporate habits) makes this very challenging for individuals, so we need proactive and co-ordinated approaches by course/programme teams. The practical implication of this discussion is that we all need to develop a much broader understanding of GenAI than a simple ‘press this button’ approach.

Reference

Beckingham, S. and Hartley, P. (2025). In search of ‘Responsible’ Generative AI (GenAI). In: Doolan, M.A. and Ritchie, L. (eds.) Transforming Teaching Excellence: Future Proofing Education for All. Leading Global Excellence in Pedagogy, Volume 3. UK: IFNTF Publishing. ISBN 978-1-7393772-2-9 (ebook). https://amzn.eu/d/gs6OV8X

Sue Beckingham

Associate Professor Learning and Teaching
Sheffield Hallam University

Sue Beckingham is an Associate Professor in Learning and Teaching at Sheffield Hallam University. Externally she is a Visiting Professor at Arden University and a Visiting Fellow at Edge Hill University. She is also a National Teaching Fellow, Principal Fellow of the Higher Education Academy and Senior Fellow of the Staff and Educational Developers Association. Her research interests include the use of technology to enhance active learning, and she has published and presented this work internationally as an invited keynote speaker. Recent book publications include Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment.

Peter Hartley

Visiting Professor
Edge Hill University

Peter Hartley is now a Higher Education Consultant and Visiting Professor at Edge Hill University, following previous roles as Professor of Education Development at the University of Bradford and Professor of Communication at Sheffield Hallam University. A National Teaching Fellow since 2000, he has promoted new technology in education, now focusing on applications/implications of Generative AI, co-editing and contributing to the SEDA/Routledge publication Using Generative AI Effectively in Higher Education (2024; paperback edition 2025). He has also produced several guides and textbooks for students (e.g. co-author of Success in Groupwork, 2nd edn). Ongoing work includes programme assessment strategies, concept mapping and visual thinking.


3 Things AI Can Do For You: The No-Nonsense Guide

by Dr Yannis
Estimated reading time: 6 minutes


Some people are for it, some people are against it, some people write guidelines about it. What unites most of these ‘factions’ is an almost total inability to use it effectively. Yes, I am talking about AI, the super useful, super intuitive thing that is changing your life by, well, generating some misspelled logos.

This blog post offers something different. Three real things you can do. I will tell you what they are, how to do them, why you should do them, and how much it will cost you. And I am not talking about the things everyone else is talking about.

I know about all this because, ever since ChatGPT launched in late 2022, I have been all over it. I have incorporated AI tools into all aspects of my work (even life) and I have built an entire business on the back of it. Here we go then.

Thing Number 1: Updating Your Resources

Academics spend a great deal of time updating things. We update reading lists, handbooks, lecture notes, guides and textbooks all of the time. Sometimes we update material in areas where we have deep expertise; sometimes we update a handbook for a module that got dumped on us because someone left.

Here is how to leverage technology to make this process faster and less painful. You will need a subscription to ChatGPT (£20+VAT, prices as of September 2025) and access to Gemini (via Google Pro subscription, currently free for a year for students and staff, or £18.99/month).

Upload the material you wish to update, for example a chapter from a textbook. Select the deep-thinking or research option and ask the bot to conduct a review for updates and recommend changes to your uploaded text. Once this is ready, ask it to incorporate the changes into your text, or input them manually as needed. Then take the updated document and run it through the other model, asking it to check for accuracy.

The result is a document that has been updated, verified and often reworded for clarity. Both models offer references and explain their rationale, so you can verify manually and check every step of the process.
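
If you prefer to script this loop rather than click through the two web interfaces, a minimal sketch using the OpenAI and Google Gemini Python SDKs might look like the following. Treat it as illustration only: the model names and prompts are assumptions, and API access is billed separately from the chat subscriptions mentioned above.

```python
# Sketch: update a chapter with one model, then cross-check it with another.
# Assumes OPENAI_API_KEY and GOOGLE_API_KEY are set in the environment.
import os

from openai import OpenAI
import google.generativeai as genai

chapter = open("chapter.txt", encoding="utf-8").read()

# Step 1: ask one model to review the text and return an updated version.
updated = OpenAI().chat.completions.create(
    model="gpt-4o",  # illustrative; pick a current model
    messages=[
        {"role": "system", "content": "You are a careful subject-matter editor."},
        {"role": "user", "content": (
            "Review this chapter for out-of-date or inaccurate claims, "
            "then return the full text with your changes incorporated:\n\n" + chapter
        )},
    ],
).choices[0].message.content

# Step 2: ask a different model to check the updated text for accuracy.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
report = genai.GenerativeModel("gemini-1.5-pro").generate_content(  # illustrative
    "Fact-check the following updated chapter. List any claims that look "
    "wrong or unsupported, with brief reasons:\n\n" + updated
).text

print(report)  # read this before accepting the update
```

The point of using two different vendors’ models is the same as in the manual workflow: the verifier has no stake in the editor’s mistakes.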

Conclusion? You have updated your resource, and it took you one morning instead of a week.

Thing Number 2: Creating Podcasts

Yes, we can lecture, we can offer tutorials, indeed we can upload PDFs to the VLE. But why stop there? Students benefit from multi-modal forms of learning. The surest way to get the message across is to deliver the same information live, via text, audio and video, multiple times.

What has been stopping us doing this effectively? The answer is that most of us are terrible in front of a camera. Yes, students may appreciate a video podcast, but if you look like the hostage in a ransom request video, you are unlikely to hold their attention.

Here is what to do. Use one of the bots you already subscribe to – ChatGPT or Gemini (as above), or even Copilot (via an institutional licence or £19/month) – to turn your lecture notes, book chapter or even recordings of live sessions into a podcast script. You can select length, style and focus to match your intended audience.
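
As with the updating workflow, this step can be scripted if you prefer. A minimal sketch (the model name and prompt wording are assumptions; the chat interface works just as well):

```python
# Sketch: turn lecture notes into a podcast script of a chosen length and style.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def podcast_script(notes: str, minutes: int = 10, style: str = "conversational") -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[{"role": "user", "content": (
            f"Rewrite these lecture notes as a {minutes}-minute {style} podcast "
            "script for undergraduates: one host, spoken register, no headings, "
            "verbal signposting between sections.\n\n" + notes
        )}],
    )
    return resp.choices[0].message.content

script = podcast_script(open("week3_notes.txt", encoding="utf-8").read())
```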

You then go to ElevenLabs ($22+VAT/month) and make a clone of your voice. This sounds scary, but it isn’t. Just one tip: do not believe the ‘5 minutes of recording and you’ll fool your mom’ spiel. You need quality recordings of your voice that run for a couple of hours to get good, reliable results.

Once you have your voice clone, go to HeyGen ($29+VAT/month) and create your video clone. This can be done (at high spec) either by uploading a green-screen recording of yourself of about 20 minutes, or by using a new photo-to-video feature (I know this sounds unlikely, but it works very well).

You are now good to go, having cloned yourself in audio and video. You can bring the two together in HeyGen, feed it your pre-prepared scripts and bingo, you can produce unlimited videos where you narrate your lecture scripts looking like a Hollywood star, and no one needs to expect a ransom note.
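
The narration step can also be batched once the voice clone exists. Here is a sketch using the ElevenLabs Python SDK – the SDK changes often, so treat the client and method names below as approximate and check the current docs; the voice ID comes from your ElevenLabs dashboard. Video assembly then happens in HeyGen’s web app.

```python
# Sketch: narrate pre-prepared scripts with your cloned voice (ElevenLabs SDK).
# pip install elevenlabs; names reflect the v1-style client and may drift.
from elevenlabs.client import ElevenLabs

client = ElevenLabs(api_key="YOUR_API_KEY")

audio = client.text_to_speech.convert(
    voice_id="YOUR_CLONED_VOICE_ID",    # from the ElevenLabs dashboard
    model_id="eleven_multilingual_v2",  # illustrative model choice
    text=script,                        # the podcast script from the previous step
)

with open("episode1.mp3", "wb") as f:
    for chunk in audio:                 # convert() streams audio as byte chunks
        f.write(chunk)
```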

Thing Number 3: Generating Student Feedback

Most of us are used to generating student feedback on the basis of proformas and templates that combine expectations on learning outcomes with marking grids and assessment criteria.

What students crave (and keep telling us in surveys such as the NSS in the UK) is personalised feedback. Hardly anyone, however, has the time to personalise templates to student scripts in a way that is deeply meaningful and helpful. We usually copy-paste the proforma and stick on an extra line inviting students to ‘come and see me if you have questions’.

The bots described above (ChatGPT, Gemini, Copilot, etc.) do a fantastic job of adapting templates to individual student scripts. Yes, this requires you to upload a great deal of university resource and the student scripts to the bot, so more on this below.
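
A sketch of that adaptation step (the proforma fields, model and prompt are illustrative; note the data-protection caveats in the small print below before uploading real scripts):

```python
# Sketch: adapt a feedback proforma to an individual student script.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def personalised_feedback(proforma: str, criteria: str, script: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": (
                "You are a university marker. Follow the proforma structure "
                "exactly and never invent marks or quotations."
            )},
            {"role": "user", "content": (
                "Proforma:\n" + proforma +
                "\n\nAssessment criteria:\n" + criteria +
                "\n\nStudent script:\n" + script +
                "\n\nFill in the proforma with feedback specific to this script, "
                "quoting short phrases from it as evidence."
            )},
        ],
    )
    return resp.choices[0].message.content
```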

And Now The Small-Print

First of all, we’ve racked up quite a bill here. You must be thinking: why should I pay for all this out of my own pocket? The answer is that you should not, but even if you did, it won’t be that bad. You don’t need to keep paying the subscriptions beyond the point you need them (all of these are pay-as-you-go) and most offer free trials or discounts at the start of your subscription. You may even get your university to pay for some of it (for example, Copilot).

Secondly, aren’t there data protection and privacy concerns? Yes, there are. All of the above assumes you either own the resources you are uploading or have permission to upload them. This is a tall order for most institutions, many of which take an unnecessarily guarded view of AI. Some concern is real, though. For example, I don’t like the latest iteration of Gemini’s terms, which forces third-party reviewers onto your content if you want full functionality.

Thirdly, won’t a kid in Texas go without a shower every time you say thank you to ChatGPT? I choose not to care about that one. The Americans have bigger fish to fry at the moment.

There you have it then: three concrete things you can do as an academic with AI, with costs explained. Bonus feature? You can use all these tools to launch your own school on social media, like I did!

Dr Ioannis Glinavos

Academic Entrepreneur

Dr Ioannis Glinavos is a legal academic and education innovator with a strong focus on the intersection of AI and EdTech. As a pioneer in integrating artificial intelligence tools into higher education, he has developed engaging, student-centred resources—particularly in law—through podcasts, video content, and interactive platforms. His work explores how emerging technologies can enhance learning outcomes and critical thinking, while keeping students engaged in synchronous and asynchronous content.


Software / Services Used

ChatGPT – https://chatgpt.com/
Google Gemini – https://gemini.google.com/app
ElevenLabs (AI Voice Generator) – https://elevenlabs.io/
HeyGen (AI Video Generator) – https://www.heygen.com/

Something Wicked This Way Comes

by Jim O’Mahony, SFHEA – Munster Technological University
Estimated reading time: 5 minutes
The true test of a professor’s intelligence: finding the lost remote control. Image generated by Nano Banana

I remember as a 7-year-old having to hop off the couch at home to change the TV channel. Shortly afterwards, a futuristic-looking device called a remote control was placed in my hand, and since then I have chosen its wizardry over my own physical ability to operate the TV. Why wouldn’t I? It’s reliable, instant, multifunctional, compliant and most importantly… less effort.

The Seduction of Less Effort

Less effort… as humans, we’re biologically wired for it. Our bodies will always choose the energy-saving option over the energy-consuming one, whether the effort is physical or cognitive. It’s an evolutionary aid to conserve energy.

Now, my life hasn’t been impaired by the introduction of the remote control, but imagine for a minute if that remote control had replaced my thinking as a 7-year-old rather than my ability to operate a TV. It sounds fanciful, but in reality this is exactly the world in which our students are now living.

Within their grasp is a seductive all-knowing technological advancement called Gen AI, with the ability to replace thinking, reflection, metacognition, creativity, evaluative judgement, interpersonal relationships and other richly valued attributes that make us uniquely human.

Now, don’t get me wrong, I’m a staunch flag bearer for this new age of Gen AI and can see the unlimited potential it holds for enhanced learning. Who knows? Someday it may even solve Bloom’s 2 sigma problem through its promise of personalised learning.

Guardrails for a New Age

However, I also realise that, as the adults in the room, we have a very narrow window to put sufficient guardrails in place for our students around its use, and urgent, considered governance is needed from university executives.

Gen AI literacy isn’t the most glamorous term (it may not even be the most appropriate term), but it encapsulates what our priority as educators should be: learn what these tools are, how they work, what their limitations, problems, challenges and pitfalls are, and how we can use them positively within our professional practice to support rather than replace learning.

Isn’t that what we all strive for? To have the right digital tools matched with the best pedagogical practices so that our students enter the workforce as well-rounded, fully prepared graduates – a workforce, by the way, that is rapidly changing, with more than 71% of employers routinely adopting Gen AI as of 12 months ago (we can only imagine what the figure is now).

Shouldn’t our teaching practices change, then, to reflect the new Gen AI-rich graduate attributes required by employers? Surely the answer is YES… or is it? There is no easy answer – and perhaps no right answer. Maybe we’ve been presented with a wicked problem – an unsolvable situation where some crusade to resist AI, and others introduce policies to ‘ban the banning’ of AI! Confused, anyone?

Rethinking Assessment in a GenAI World

I believe a common-sense approach is best and would have us reimagine our educational programmes with valid, secure and authentic assessments that reward learning both with and without the use of Gen AI.

Achieving this is far from easy, but as a starting point, consider a recent paper from Deakin University, which advocates for structural changes to assessment design along with clearly communicated instructions to students around Gen AI use.

To facilitate a more discursive approach to reimagined assessment protocols, some universities are adopting ‘traffic light’ systems such as the AI Assessment Scale, which, although not perfect (or the whole solution), at least promotes open and transparent dialogue with students about assessment integrity – and that’s never a bad thing.

The challenge will come from those academics who resist the adoption of Gen AI in education. Whether their reasons relate to privacy, environmental issues, ethics, inherent bias, AGI, autonomous AI or cognitive offloading concerns (all well-intentioned and entirely valid by the way), Higher Ed debates and decision making around this topic in the coming months will be robust and energetic.

Accommodating the fearful or ‘traditionalist educators’ who feel unprepared or unwilling to road-test Gen AI should be a key part of any educational strategy or initiative. Their voices should be heard and their opinions considered – but in return, they also need to understand how Gen AI works.

From Resistance to Fluency

Within each department, faculty, staffroom and T&L team – even among the rows of your students – you will find early adopters and digital champions who are a little further along this dimly lit path to Gen AI enlightenment. Seek them out, have coffee with them, reflect on their wisdom and commit to trialling at least one new Gen AI tool or application each week – here’s a list of 100 to get you started. Slowly build your confidence, take an open course, learn about AI fluency, and benefit from the expertise of others.

I’m not encouraging you to be an AI evangelist, but improving your knowledge and general AI capabilities will leave you better placed to make informed decisions for you and your students.

Now, did anyone see where I left the remote control?

Jim O’Mahony

University Professor | Biotechnologist | Teaching & Learning Specialist
Munster Technological University

I am a passionate and enthusiastic university lecturer with over 20 years’ experience of designing, delivering and assessing undergraduate and postgraduate programmes. My primary focus as an academic is to empower students to achieve their full potential through innovative educational strategies and carefully designed curricula. I embrace the strategic and well-intentioned use of digital tools as part of my learning ethos, and I have been an early adopter and enthusiastic advocate of Artificial Intelligence (AI) as an educational tool.


Links

Jim also runs a wonderful newsletter on LinkedIn
https://www.linkedin.com/newsletters/ai-simplified-for-educators-7366495926846210052/
