Universities: GenAI – There’s No Stopping, Start Shaping!

By Frances O’Donnell, Instructional Designer, ATU
Estimated reading time: 8 minutes
A group of diverse higher education professionals and students standing in front of a modern university building, engaged in a collaborative discussion around a glowing digital interface displaying the Gen.S.A.R. framework. In the foreground, traditional open books and a graduation cap are intertwined with glowing neural network nodes, symbolizing the integration of generative AI with traditional academic foundations. Image (and typos) generated by Nano Banana.
Moving from debate to action: Implementing a cross-departmental strategy to shape the future of GenAI in higher education. Image (and typos) generated by Nano Banana.

Debate continues to swing between those pushing rapid adoption and those advocating caution of GenAI, for example, panic about “AI taking over the classroom” and outrage at Big Tech’s labour practices. Both are important, but are these and other concerns causing inaction? In many cases, we are quietly watching students hand their data and their critical thinking over to the very Big Tech companies we are arguing against (while we still fly on holidays, stream on smart TVs and buy the same devices from the same companies). Pretending that GenAI in education is the one place we finally draw an ethical line, while doing nothing to make its use safer or more equitable, is not helpful. By all means, keep debating, but not at the cost of another three or four cohorts.

This opinion post suggests three things universities should address now: a minimal set of GenAI functions that should be available to staff and students, and a four-step teaching process to help lecturers rethink their role with GenAI.

Three things universities need to address now

1. Tell students and staff clearly what they can use (Déjà vu?)

Students and staff deserve clarity on which GenAI tools they have access to, what they can use them for and which ones are institutionally supported. Has your university provided this? No more grey areas or “ask your lecturer”. If people do not know this, it pushes GenAI use into secrecy. That secrecy hands more power to Big Tech to extract data and embed bias, while also quietly taking away their cognitive ability.

2. Untangle GenAI from “academic integrity”

Tightly linking GenAI to academic integrity was a mistake! It has created an endless debate about whether to permit or prohibit GenAI, which pushes use further underground. At this point, there is no real equity and no real academic integrity. Use of GenAI cannot simply be stopped, proved or disproved, so pretending otherwise, while holding endless anti‑AI discussions, will not lead to a solution. There is no putting GenAI back in the bottle!

3. Treat GenAI as a shared responsibility

GenAI affects curriculum design, assessment, student support, digital literacy, employability, libraries, disability support, IT, policy and everywhere in between. It cannot sit on the shoulders of one department or lead. Every university needs a cross‑departmental AI strategy that includes the student union, academic leads, IT, the data protection office, careers, student support, administration and teaching and learning personnel. Until leadership treats GenAI as systemic, lecturers will keep firefighting contradictions and marking assignments they know were AI-generated. Bring everyone to the table, and don’t adjourn until decisions have been made on students and staff clarity (even if this clarity is dynamic in nature – do not continue to leave them navigating this alone for another three years).

What GenAI functions should be provided

At a minimum, institutions should give safe, equitable access to:

  • A campus-licensed GenAI model
    One model for all staff and students to ask questions, draft, summarise, explain and translate text, including support for multilingual learners.
  • Multimodal creation tools
    Tools to create images, audio, video (including avatars), diagrams, code, etc., with clear ethical and legal guidance.
  • Research support tools
    Tools to support research, transcribing, coding, summaries, theme mapping, citations, etc., that reinforce critical exploration
  • Assessment and teaching design tools
    Tools to draft examples, case variations, rubrics, flashcards, questions, etc., are stored inside institutional systems.
  • Custom agents
    Staff create and share custom AI agents configured for specific purposes: subject-specific scaffolding for students, or workflow agents for planning, resource creation and content adaptation. Keep interactions within institutional systems.
  • Accessibility focused GenAI
    Tools that deliver captions, plain language rewrites, alt text and personalised study materials. Many institutions already have these in place.

Safer GenAI tools for exploration, collaboration and reflection. Now what do they do with them? This is where something like Gen.S.A.R comes in. A potential approach where staff and students explore together with GenAI, and one that is adaptable to different contexts/disciplines.

Gen.S.A.R.

Gen.S.A.R. is simply a suggested starting point; there is no magic wand, but this may help to ignite practical ideas from others. It suggests a shift from passive content delivery to constructivist and experiential learning.

  • GenAI exploration and collaborative knowledge construction
  • Scrutinise and share
  • Apply in real-world contexts with a low or no-tech approach
  • Reflect and evaluate

It keeps critical thinking, collaboration and real-world application at the centre, with GenAI as a set of tools rather than a replacement for learning. Note: GenAI is a set of tools, not a human!

Phase 1: GenAI, constructing, not copy-pasting

Students use GenAI, the lecturer, and reputable sources to explore a concept or problem linked to the learning outcomes. Lecturers guide this exploration as students work individually or in groups. With ongoing lecturer input, students may choose whether to use GenAI or other sources, but all develop an understanding of GenAI’s role in learning.

Phase 2: Scrutinise and Share

The second phase focuses on scrutinising and sharing ideas with others, not just presenting them as finished facts. Students bring GenAI outputs, reputable sources and their own thinking into dialogue. They interrogate evidence, assumptions and perspectives in groups or class discussion (social constructivism, dialogic teaching). The lecturer – the content expert – oversees this process and identifies the errors, draws attention to the errors and helps students clarify GenAI outputs.

Phase 3: Apply, low-tech, real-world

Screens step back. Students apply what they have discovered in low or no-tech ways: diagrams, mind maps, zines, prototypes, role plays, scenarios. They connect what they discovered to real contexts and show understanding through doing, making, explaining and practical application.

Phase 4: Reflect, evaluate and look forward

Students then evaluate and reflect on both their learning process and the role of GenAI. Using written, audio, video or visual reflections, they consider what they learned, how GenAI supported or distorted that learning and how this connects to their future. This reflective work, combined with artefacts from earlier phases, supports peer, self and lecturer assessment and moves us towards competency and readiness-based judgements.

Resourcing Gen.S.A.R. Yes, smaller class sizes and support would be required, but aspects of this can be implemented now (and are being implemented by some already). Time shifts to facilitation, co-learning, process feedback, and authentic evaluation (less three-thousand-word essays). This approach is not perfect but at least it’s an approach and one that draws on long‑standing learning theories, including constructivism, social constructivism, experiential learning, and traditions in inclusive and competency‑based education.

There’s No Stopping It, Time to Shape It

GenAI is not going away. Exploitative labour practices, data abuse and profit motives are real (and not exclusive to AI), and naming these harms is essential, but continuing to let these debates dominate any movement is not helpful. Universities can choose to lead (and I commend, not condemn, those who already are) with clear guidance, equitable access to safe GenAI tools and learning design. The alternative is all the risks associated with students and staff relying on personal accounts and workarounds.

For the integrity of education itself, it is time to translate debates into action. The genie is not going back in the bottle, and our profit-driven society is not only shaped by Big Tech but also by the everyday choices of those of us living privileged lives in westernised societies. It is time to be honest about our own complicity, to step out of the ivory tower and work with higher education students to navigate the impact GenAI is having on their lives right now.

Note: My views on GenAI for younger learners is very different; the suggestions here focus specifically on higher education.

Frances O’Donnell

Instructional Designer
ATU

Exploring the pros and cons of AI & GenAI in education, and indeed in society. Currently completing a Doctorate in Education with a focus on AI & Emerging Technologies.

Passionate about the potential education has to develop one’s self-confidence and self-worth, but frustrated by the fact it often does the opposite. AI has magnified our tendency to overassess and our inability to truly move away from rote learning.

Whether I’m carrying out the role of an instructional designer, or delivering workshops or researching, I think we should work together to make education a catalyst of change where learners are empowered to become confident as well as socially and environmentally conscious members of society. With or without AI, let’s change the perception of what success looks like for young people.

Keywords


Why Students Shouldn’t Use AI, Even Though It’s OK for Teachers


A split image showing a frustrated male student on the left, with text "AI USE FOR STUDENTS: PROHIBITED," and a smiling female teacher on the right, with text "AI USE FOR TEACHERS: ACCEPTED." Both are working on laptops in a contrasting light. Image (and typos) generated by Nano Banana.
The double standard: Exploring why AI use might be acceptable for educators yet detrimental for students’ learning and development. Image (and typos) generated by Nano Banana.

Source

Edutopia

Summary

History and journalism teacher David Cutler argues that while generative AI can meaningfully enhance teachers’ feedback and efficiency, students should not use it unsupervised. Teachers possess the critical judgment to evaluate AI outputs, but students risk bypassing essential cognitive processes and genuine understanding. Cutler likens premature AI use to handing a calculator to someone who hasn’t learned basic arithmetic. He instead promotes structured, transparent use—AI for non-assessed learning or teacher moderation—while continuing to teach critical thinking and writing through in-class work. His stance reflects both ethical caution and pragmatic optimism about AI’s potential to support, not supplant, human learning.

Key Points

  • Teachers can use AI to improve feedback, fairness, and grading efficiency.
  • Students lack the maturity and foundational skills for unsupervised AI use.
  • In-class writing fosters integrity, ownership, and authentic reasoning.
  • Transparent teacher use models responsible AI practice.
  • Slow, deliberate adoption best protects student learning and trust.

Keywords

URL

https://www.edutopia.org/article/why-students-should-not-use-ai/

Summary generated by ChatGPT 5


The Lecturers Learning to Spot AI Misconduct


Four serious and focused lecturers/academics (two men, two women) are gathered around a table in a dimly lit, high-tech setting. They are looking at a large, glowing blue holographic screen that displays complex text, code, and highlights, with the prominent title "AI MISCONDUCT DETECTION." The screen shows an example of potentially AI-generated text with highlighted sections. Two individuals are actively pointing at the screen, while others are taking notes on laptops and paper. Surrounding the main screen are smaller holographic icons representing documents and a magnifying glass, symbolizing investigation and analysis. Image (and typos) generated by Nano Banana.
As AI tools become more sophisticated, the challenge of maintaining academic integrity intensifies. This image depicts lecturers undergoing specialised training to hone their skills in identifying AI-generated misconduct, ensuring fairness and originality in student work. Image (and typos) generated by Nano Banana.

Source

BBC News

Summary

Academics at De Montfort University (DMU) in Leicester are receiving specialist training to identify when students misuse artificial intelligence in coursework. The initiative, led by Dr Abiodun Egbetokun and supported by the university’s new AI policy, seeks to balance ethical AI use with maintaining academic integrity. Lecturers are being taught to spot linguistic “markers” of AI generation, such as repetitive phrasing or Americanised language, though experts acknowledge that detection is becoming increasingly difficult. DMU encourages students to use AI tools to support critical thinking and research, but presenting AI-generated work as one’s own constitutes misconduct. Staff also highlight the flaws of AI detection software, which has produced false positives, prompting calls for education over punishment. Students, meanwhile, recognise both the value and ethical boundaries of AI in their studies and future professions.

Key Points

  • DMU lecturers are being trained to recognise signs of AI misuse in student work.
  • The university’s policy allows ethical AI use for learning support but bans misrepresentation.
  • Detection focuses on linguistic patterns rather than unreliable software tools.
  • Staff warn that false accusations can harm students as much as confirmed misconduct.
  • Educators stress fostering AI literacy and integrity rather than “catching out” students.
  • Students value AI for translation, study support, and clinical applications but accept clear ethical limits.

Keywords

URL

https://www.bbc.com/news/articles/c2kn3gn8vl9o

Summary generated by ChatGPT 5


Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

by Jonathan Sansom – Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Estimated reading time: 5 minutes
A laptop displays a "Microsoft Copilot" interface with two main sections: "Force Analysis: Opportunities vs. Challenges" showing a line graph, and "Pedagogical Copilot Agent" with an icon representing a graduation cap, books, and other educational symbols. In the background, a blurred image of students in a secondary school classroom is visible. Image (and typos) generated by Nano Banana.
Bridging the gap: This image illustrates how Microsoft Copilot can be leveraged in secondary education, moving from a “force analysis” of opportunities and challenges to the implementation of “pedagogical copilot agents” that assist both students and educators. Image (and typos) generated by Nano Banana.

At Hills Road, we’ve been living in the strange middle ground of generative AI adoption. If you charted its trajectory, it wouldn’t look like a neat curve or even the familiar ‘hype cycle’. It’s more like a tangled ball of wool: multiple forces pulling in competing directions.

The Forces at Play

Our recent work with Copilot Agents has made this more obvious. If we attempt a force analysis, the drivers for GenAI adoption are strong:

  • The need to equip students and staff with future-ready skills.
  • Policy and regulatory expectations, from DfE and Ofsted, to show assurance around AI integration.
  • National AI strategies that frame this as an essential area for investment.
  • The promise of personalised learning and workload reduction.
  • A pervasive cultural hype, blending existential narratives with a relentless ‘AI sales’ culture.

But there are also significant restraints:

  • Ongoing academic integrity concerns.
  • GDPR and data privacy ambiguity.
  • Patchy CPD and teacher digital confidence.
  • Digital equity and access challenges.
  • The energy cost of AI at scale.
  • Polarisation of educator opinion, and staff change fatigue.

The result is persistent dissonance. AI is neither fully embraced nor rejected; instead, we are all negotiating what it might mean in our own settings.

Educator-Led AI Design

One way we’ve tried to respond is through educator-led design. Our philosophy is simple: we shouldn’t just adopt GenAI; we must adapt it to fit our educational context.

That thinking first surfaced in experiments on Poe.com, where we created an Extended Project Qualification (EPQ) Virtual Mentor. It was popular, but it lived outside institutional control – not enterprise and not GDPR-secure.

So in 2025 we have moved everything in-house. Using Microsoft Copilot Studio, we created 36 curriculum-specific agents, one for each A Level subject, deployed directly inside Teams. These agents are connected to our SharePoint course resources, ensuring students and staff interact with AI in a trusted, institutionally managed environment.

Built-in Pedagogical Skills

Rather than thinking of these agents as simply ‘question answering machines’, we’ve tried to embed pedagogical skills that mirror what good teaching looks like. Each agent is structured around:

  • Explaining through metaphor and analogy – helping students access complex ideas in simple, relatable ways.
  • Prompting reflection – asking students to think aloud, reconsider, or connect their ideas.
  • Stretching higher-order thinking – moving beyond recall into analysis, synthesis, and evaluation.
  • Encouraging subject language use – reinforcing terminology in context.
  • Providing scaffolded progression – introducing concepts step by step, only deepening complexity as students respond.
  • Supporting responsible AI use – modelling ethical engagement and critical AI literacy.

These skills give the agents an educational texture. For example, if a sociology student asks: “What does patriarchy mean, but in normal terms?”, the agent won’t produce a dense definition. It will begin with a metaphor from everyday life, check understanding through a follow-up question, and then carefully layer in disciplinary concepts. The process is dialogic and recursive, echoing the scaffolding teachers already use in classrooms.

The Case for Copilot

We’re well aware that Microsoft Copilot Studio wasn’t designed as a pedagogical platform. It comes from the world of Power Automate, not the classroom. In many ways we’re “hijacking” it for our purposes. But it works.

The technical model is efficient: one Copilot Studio authoring licence, no full Copilot licences required, and all interactions handled through Teams chat. Data stays in tenancy, governed by our 365 permissions. It’s simple, secure, and scalable.

And crucially, it has allowed us to position AI as a learning partner, not a replacement for teaching. Our mantra remains: pedagogy first, technology second.

Lessons Learned So Far

From our pilots, a few lessons stand out:

  • Moving to an in-tenancy model was essential for trust.
  • Pedagogy must remain the driver – we want meaningful learning conversations, not shortcuts to answers.
  • Expectations must be realistic. Copilot Studio has clear limitations, especially in STEM contexts where dialogue is weaker.
  • AI integration is as much about culture, training, and mindset as it is about the underlying technology.

Looking Ahead

As we head into 2025–26, we’re expanding staff training, refining agent ‘skills’, and building metrics to assess impact. We know this is a long-haul project – five years at least – but it feels like the right direction.

The GenAI systems that students and teachers are often using in college were in the main designed mainly by engineers, developers, and commercial actors. What’s missing is the educator’s voice. Our work is about inserting that voice: shaping AI not just as a tool for efficiency, but as an ally for reflection, questioning, and deeper thinking.

The challenge is to keep students out of what I’ve called the ‘Cognitive Valley’, that place where understanding is lost because thinking has been short-circuited. Good pedagogical AI can help us avoid that.

We’re not there yet. Some results are excellent, others uneven. But the work is underway, and the potential is undeniable. The task now is to make GenAI fit our context, not the other way around.

Jonathan Sansom

Director of Digital Strategy,
Hills Road Sixth Form College, Cambridge

Passionate about education, digital strategy in education, social and political perspectives on the purpose of learning, cultural change, wellbeing, group dynamics, – and the mysteries of creativity…


Software / Services Used

Microsoft Copilot Studiohttps://www.microsoft.com/en-us/microsoft-365-copilot/microsoft-copilot-studio

Keywords


Greece Launches “AI in Schools” Program to Bring ChatGPT Edu into Classrooms


A bright and modern elementary school classroom where a female teacher stands in front of a large digital screen, pointing at a map of Greece with an "AI IN SCHOOLS PROGRAM" logo. The screen also displays the Greek flag and "ΕΛΛΑΔΑ: AI ΣΤΑ ΣΧΟΛΕΙΑ" (Greece: AI in Schools). Rows of young students are seated at individual desks, each actively engaged with a laptop displaying the "ChatGPT Edu" interface. The classroom has large windows, plants, and classical artwork, blending traditional and modern educational elements. Image (and typos) generated by Nano Banana.
Greece is making a significant leap into the future of education by launching its “AI in Schools” program, introducing ChatGPT Edu into classrooms nationwide. This initiative aims to equip students with cutting-edge AI tools, fostering innovation and preparing them for a technology-driven world. Image (and typos) generated by Nano Banana.

Source

Greek Reporter

Summary

Greece has announced a nationwide initiative, AI in Schools, making it one of the first European countries to formally integrate generative AI into public education. Beginning with a pilot in December 2025, the programme will introduce ChatGPT Edu—OpenAI’s education-focused platform—into 20 high schools. Led by The Tipping Point in Education and funded by the Onassis Foundation, the initiative aims to enhance AI literacy among teachers and students while maintaining ethical standards and data privacy. The rollout includes four phases: teacher training, pilot implementation, student participation, and full integration by 2027. The Ministry of Education has established strict GDPR-compliant data protocols, ensuring that AI supports creativity, collaboration, and critical thinking without replacing teachers’ central role in learning.

Key Points

  • Greece will pilot ChatGPT Edu in 20 high schools from December 2025.
  • The project is run by The Tipping Point in Education and funded by the Onassis Foundation.
  • A four-phase rollout prioritises teacher training, student engagement, and responsible AI use.
  • ChatGPT Edu offers secure, ad-free, GDPR-compliant tools for schools.
  • The initiative promotes AI literacy, ethical awareness, and digital innovation.
  • Teachers remain central to guiding creative and critical classroom use of AI.

Keywords

URL

https://greekreporter.com/2025/10/18/greece-ai-schools-program-chatgpt-edu-classrooms/

Summary generated by ChatGPT 5