The double standard: Exploring why AI use might be acceptable for educators yet detrimental for students’ learning and development. Image (and typos) generated by Nano Banana.
Source
Edutopia
Summary
History and journalism teacher David Cutler argues that while generative AI can meaningfully enhance teachers’ feedback and efficiency, students should not use it unsupervised. Teachers possess the critical judgment to evaluate AI outputs, but students risk bypassing essential cognitive processes and genuine understanding. Cutler likens premature AI use to handing a calculator to someone who hasn’t learned basic arithmetic. He instead promotes structured, transparent use—reserving AI for non-assessed learning or teacher-moderated activities—while continuing to teach critical thinking and writing through in-class work. His stance reflects both ethical caution and pragmatic optimism about AI’s potential to support, not supplant, human learning.
Key Points
Teachers can use AI to improve feedback, fairness, and grading efficiency.
Students lack the maturity and foundational skills for unsupervised AI use.
In-class writing fosters integrity, ownership, and authentic reasoning.
Transparent teacher use models responsible AI practice.
Slow, deliberate adoption best protects student learning and trust.
As AI tools become more sophisticated, the challenge of maintaining academic integrity intensifies. This image depicts lecturers undergoing specialised training to hone their skills in identifying AI-generated misconduct, ensuring fairness and originality in student work. Image (and typos) generated by Nano Banana.
Source
BBC News
Summary
Academics at De Montfort University (DMU) in Leicester are receiving specialist training to identify when students misuse artificial intelligence in coursework. The initiative, led by Dr Abiodun Egbetokun and supported by the university’s new AI policy, seeks to balance ethical AI use with maintaining academic integrity. Lecturers are being taught to spot linguistic “markers” of AI generation, such as repetitive phrasing or Americanised language, though experts acknowledge that detection is becoming increasingly difficult. DMU encourages students to use AI tools to support critical thinking and research, but presenting AI-generated work as one’s own constitutes misconduct. Staff also highlight the flaws of AI detection software, which has produced false positives, prompting calls for education over punishment. Students, meanwhile, recognise both the value and ethical boundaries of AI in their studies and future professions.
Key Points
DMU lecturers are being trained to recognise signs of AI misuse in student work.
The university’s policy allows ethical AI use for learning support but bans misrepresentation.
Detection focuses on linguistic patterns rather than unreliable software tools.
Staff warn that false accusations can harm students as much as confirmed misconduct.
Educators stress fostering AI literacy and integrity rather than “catching out” students.
Students value AI for translation, study support, and clinical applications but accept clear ethical limits.
by Jonathan Sansom – Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Estimated reading time: 5 minutes
Bridging the gap: This image illustrates how Microsoft Copilot can be leveraged in secondary education, moving from a “force analysis” of opportunities and challenges to the implementation of “pedagogical copilot agents” that assist both students and educators. Image (and typos) generated by Nano Banana.
At Hills Road, we’ve been living in the strange middle ground of generative AI adoption. If you charted its trajectory, it wouldn’t look like a neat curve or even the familiar ‘hype cycle’. It’s more like a tangled ball of wool: multiple forces pulling in competing directions.
The Forces at Play
Our recent work with Copilot Agents has made this more obvious. If we attempt a force analysis, the drivers for GenAI adoption are strong:
The need to equip students and staff with future-ready skills.
Policy and regulatory expectations from the DfE and Ofsted to demonstrate assurance around AI integration.
National AI strategies that frame this as an essential area for investment.
The promise of personalised learning and workload reduction.
A pervasive cultural hype, blending existential narratives with a relentless ‘AI sales’ culture.
But there are also significant restraints:
Ongoing academic integrity concerns.
GDPR and data privacy ambiguity.
Patchy CPD and teacher digital confidence.
Digital equity and access challenges.
The energy cost of AI at scale.
Polarisation of educator opinion, and staff change fatigue.
The result is persistent dissonance. AI is neither fully embraced nor rejected; instead, we are all negotiating what it might mean in our own settings.
Educator-Led AI Design
One way we’ve tried to respond is through educator-led design. Our philosophy is simple: we shouldn’t just adopt GenAI; we must adapt it to fit our educational context.
That thinking first surfaced in experiments on Poe.com, where we created an Extended Project Qualification (EPQ) Virtual Mentor. It was popular, but it lived outside institutional control – not an enterprise platform, and not GDPR-secure.
So in 2025 we have moved everything in-house. Using Microsoft Copilot Studio, we created 36 curriculum-specific agents, one for each A Level subject, deployed directly inside Teams. These agents are connected to our SharePoint course resources, ensuring students and staff interact with AI in a trusted, institutionally managed environment.
Built-in Pedagogical Skills
Rather than thinking of these agents as simply ‘question answering machines’, we’ve tried to embed pedagogical skills that mirror what good teaching looks like. Each agent is structured around:
Explaining through metaphor and analogy – helping students access complex ideas in simple, relatable ways.
Prompting reflection – asking students to think aloud, reconsider, or connect their ideas.
Stretching higher-order thinking – moving beyond recall into analysis, synthesis, and evaluation.
Encouraging subject language use – reinforcing terminology in context.
Providing scaffolded progression – introducing concepts step by step, only deepening complexity as students respond.
Supporting responsible AI use – modelling ethical engagement and critical AI literacy.
These skills give the agents an educational texture. For example, if a sociology student asks: “What does patriarchy mean, but in normal terms?”, the agent won’t produce a dense definition. It will begin with a metaphor from everyday life, check understanding through a follow-up question, and then carefully layer in disciplinary concepts. The process is dialogic and recursive, echoing the scaffolding teachers already use in classrooms.
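In Copilot Studio, much of this behaviour is carried by the agent’s natural-language instructions rather than by code. The sketch below is purely illustrative – a condensed, hypothetical version of the kind of instruction block that could encode the six skills above, not Hills Road’s actual production configuration:

```text
You are the Sociology study companion for A Level students at this college.
Ground every answer in the attached SharePoint course resources.

For each student question:
1. Open with a metaphor or everyday analogy before any formal definition.
2. Ask one short follow-up question to check understanding before going deeper.
3. Only then layer in disciplinary terminology, defining each term in context.
4. Push beyond recall: invite the student to analyse, compare, and evaluate.
5. Deepen complexity step by step, only as the student's replies show readiness.
6. Never write assessed work for the student; model honest, critical use of AI.
```

The numbered rules map onto the pedagogical skills listed above; in practice each of the 36 subject agents would carry its own subject-specific variant.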
The Case for Copilot
We’re well aware that Microsoft Copilot Studio wasn’t designed as a pedagogical platform. It comes from the world of Power Automate, not the classroom. In many ways we’re “hijacking” it for our purposes. But it works.
The technical model is efficient: one Copilot Studio authoring licence, no full Copilot licences required, and all interactions handled through Teams chat. Data stays in tenancy, governed by our 365 permissions. It’s simple, secure, and scalable.
And crucially, it has allowed us to position AI as a learning partner, not a replacement for teaching. Our mantra remains: pedagogy first, technology second.
Lessons Learned So Far
From our pilots, a few lessons stand out:
Moving to an in-tenancy model was essential for trust.
Pedagogy must remain the driver – we want meaningful learning conversations, not shortcuts to answers.
Expectations must be realistic. Copilot Studio has clear limitations, especially in STEM contexts where dialogue is weaker.
AI integration is as much about culture, training, and mindset as it is about the underlying technology.
Looking Ahead
As we head into 2025–26, we’re expanding staff training, refining agent ‘skills’, and building metrics to assess impact. We know this is a long-haul project – five years at least – but it feels like the right direction.
The GenAI systems that students and teachers typically use in college were designed mainly by engineers, developers, and commercial actors. What’s missing is the educator’s voice. Our work is about inserting that voice: shaping AI not just as a tool for efficiency, but as an ally for reflection, questioning, and deeper thinking.
The challenge is to keep students out of what I’ve called the ‘Cognitive Valley’, that place where understanding is lost because thinking has been short-circuited. Good pedagogical AI can help us avoid that.
We’re not there yet. Some results are excellent, others uneven. But the work is underway, and the potential is undeniable. The task now is to make GenAI fit our context, not the other way around.
Jonathan Sansom
Director of Digital Strategy, Hills Road Sixth Form College, Cambridge
Passionate about education, digital strategy in education, social and political perspectives on the purpose of learning, cultural change, wellbeing, group dynamics – and the mysteries of creativity…
Greece is making a significant leap into the future of education by launching its “AI in Schools” program, introducing ChatGPT Edu into classrooms nationwide. This initiative aims to equip students with cutting-edge AI tools, fostering innovation and preparing them for a technology-driven world. Image (and typos) generated by Nano Banana.
Source
Greek Reporter
Summary
Greece has announced a nationwide initiative, AI in Schools, making it one of the first European countries to formally integrate generative AI into public education. Beginning with a pilot in December 2025, the programme will introduce ChatGPT Edu—OpenAI’s education-focused platform—into 20 high schools. Led by The Tipping Point in Education and funded by the Onassis Foundation, the initiative aims to enhance AI literacy among teachers and students while maintaining ethical standards and data privacy. The rollout includes four phases: teacher training, pilot implementation, student participation, and full integration by 2027. The Ministry of Education has established strict GDPR-compliant data protocols, ensuring that AI supports creativity, collaboration, and critical thinking without replacing teachers’ central role in learning.
Key Points
Greece will pilot ChatGPT Edu in 20 high schools from December 2025.
The project is run by The Tipping Point in Education and funded by the Onassis Foundation.
A four-phase rollout prioritises teacher training, student engagement, and responsible AI use.
ChatGPT Edu offers secure, ad-free, GDPR-compliant tools for schools.
The initiative promotes AI literacy, ethical awareness, and digital innovation.
Teachers remain central to guiding creative and critical classroom use of AI.
Empowering educators for the future: A new AI and assessment training initiative is equipping lecturers with the knowledge and tools to effectively integrate artificial intelligence into their evaluation strategies, enhancing teaching and learning outcomes. Image (and typos) generated by Nano Banana.
Source
North-West University News (South Africa)
Summary
North-West University (NWU) has launched a large-scale professional development initiative to promote responsible use of artificial intelligence in teaching, learning, and assessment. The AI and Assessment course, supported by the Senior Deputy Vice-Chancellor for Teaching and Learning, the AI Hub, and the Centre for Teaching and Learning, awarded R500 Takealot vouchers to the first 800 lecturers who completed all eleven modules. Participants earned fifteen digital badges by achieving over 80 per cent in assessments and submitting a portfolio of evidence. The initiative underscores NWU’s commitment to digital transformation and capacity building. Lecturers praised the programme for strengthening their understanding of ethical and effective AI integration in higher education.
Key Points
800 NWU lecturers were incentivised to complete the AI and Assessment training course.
The programme awarded fifteen digital badges for verified completion and assessment success.
Leadership highlighted AI’s transformative role in teaching and learning innovation.
Participants reported improved confidence in using AI tools responsibly and ethically.
The initiative reinforces NWU’s institutional focus on digital capability and staff development.