The Transformative Power of Communities of Practice in AI Upskilling for Educators

By Bernie Goldbach, RUN EU SAP Lead
Estimated reading time: 5 minutes
The power of collaboration: Communities of Practice are essential for educators to collectively navigate and integrate new AI technologies, transforming teaching and learning through shared knowledge and support. Image (and typos) generated by Nano Banana.

When the N-TUTORR programme ended in Ireland, I remained seated in the main Edtech25 auditorium to hear some of the final conversations among its key players. They stood at a remarkable intersection of professional development and technological innovation, and some of them issued a call to action for continued conversation, perhaps through engaging with generative AI tools within a Community of Practice (CoP).

Throughout my 40-year teaching career, I have walked pathways to genuine job satisfaction that extend far beyond simple skill acquisition. In my case, this satisfaction has emerged from the synergy between collaborative learning, pedagogical innovation, and the excitement of exploring uncharted territory alongside peers who share a commitment to educational excellence.

Finding Professional Fulfillment Through Shared Learning

The journey of upskilling in generative AI feels overwhelming when undertaken in isolation. I am still looking for a structured CoP for Generativism in Education; such a community would be a rich vein of collective discovery. For the moment, three colleagues help me develop my skills in the ethical and sustainable use of AI.

Ethan Mollick, whose research at the Wharton School has illuminated the practical applications of AI in educational contexts, consistently emphasises that the most effective learning about AI tools happens through shared experimentation and peer discussion. His work demonstrates that educators who engage collaboratively with AI technologies develop more sophisticated mental models of how these tools can enhance rather than replace pedagogical expertise. This collaborative approach alleviates the anxiety many educators feel about technological change, replacing it with curiosity and professional confidence.

Mairéad Pratschke, whose work emphasises the importance of collaborative professional learning, has highlighted how communities create safe spaces where educators can experiment, fail, and succeed together without judgment. This psychological safety becomes the foundation upon which genuine professional growth occurs.

Frances O’Donnell, whose insights at major conferences have become invaluable resources for educators navigating the AI landscape, directs the most effective AI workshops I have attended. O’Donnell’s hands-on training at conferences such as CESI (https://www.cesi.ie), EDULEARN (https://iceri.org), ILTA (https://ilta.ie), and Online Educa Berlin (https://oeb.global) has illuminated the engaging features of instructional design that emerge when educators thoughtfully integrate AI tools. Her instructional design frameworks demonstrate how AI can support the creation of personalised learning pathways, adaptive assessments, and multimodal content that engages diverse learners. O’Donnell’s emphasis on the human element in AI-assisted design resonates deeply with Communities of Practice.

And thanks to Frances O’Donnell, I discovered the AI assistants inside H5P.

Elevating Instructional Design Through AI-Assisted Tools

The most significant leap I have made comes from combining AI tools with collaborative professional learning to raise the quality of instructional design. The commercial version of H5P (https://h5p.com) has revolutionised my workflow for creating interactive educational content, and the smart import feature of H5P.com complements my teaching practice. I can quickly design rich, engaging learning experiences that would previously have required specialised technical skills or significant time investment, creating everything from interactive videos with embedded questions to gamified quizzes and sophisticated branching scenarios.

I hope I find a CoP in Ireland that is interested in several of the H5P workflows I have adopted. For the moment, I’m revealing these remarkable capabilities while meeting people at education events in Belgium, Spain, Portugal, and the Netherlands. It feels like I’m a town crier who has a notebook full of shared templates. I want to offer links to the interactive content that I have created with H5P AI and gain feedback from interested colleagues. But more than the conversations at the conferences, I’m interested in making real connections with educators who want to actively participate in vibrant online communities where sustained professional learning continues.

Sustaining Innovation with Community

Job satisfaction among educators has always been closely tied to their sense of efficacy and their ability to make meaningful impacts on student learning. Communities of Practice focused on AI upskilling amplify this satisfaction by creating networks of mutual support where members celebrate innovations, troubleshoot challenges, and collectively develop best practices. When an educator discovers an effective way to use AI for differentiation or assessment design, sharing that discovery with colleagues who understand the pedagogical context creates a profound sense of professional contribution.

These communities also combat the professional tension that proficient AI users currently face. Mollick’s observations about blowback against widespread AI adoption in education reveal a critical imperative: stand together within a network that validates the quality of teaching and provides constructive feedback. When shared with a community, individual risk-taking morphs into collective innovation, making the professional development experience inherently more satisfying and sustainable.

We need the spark of N-TUTORR inside an AI-focused Community of Practice. We need to amplify voices. Together we need to become confident navigators of innovation. We need to co-create contextually appropriate pedagogical approaches that effectively leverage AI in education.




AI Could Revolutionise Higher Education in a Way We Did Not Expect

by Brian Mulligan – e-learning consultant with Universal Learning Systems (ulsystems.com)
Estimated reading time: 5 minutes
Artificial intelligence is poised to unleash a revolution in higher education, not in the ways we’ve conventionally imagined, but through unexpected and profound transformations. This image visualises AI as a central, dynamic force reshaping academic landscapes, curriculum delivery, and the very nature of learning in universities. Image (and typos) generated by Nano Banana.

The current conversation about Artificial Intelligence (AI) in higher education primarily focuses on efficiency and impact. People talk about how AI can personalise learning, streamline administrative tasks, and help colleges “do more with less.” For decades, every new technology, from online training to MOOCs, promised a similar transformation. Generative AI certainly offers powerful tools to enhance existing processes.

However, the truly revolutionary potential of AI in higher education may come from a more critical and urgent pressure: its challenge to the integrity of academic credentials and the learning processes they are supposed to represent.

Historically, colleges haven’t had a strong incentive to completely overhaul their teaching models just because new technology arrived. Traditional lectures, established assessment methods, and the value of a physical campus have remained largely entrenched. Technology usually just served to augment existing practices, not to transform the underlying structures of teaching, learning, and accreditation.

AI, however, may be a different kind of catalyst for change.

The Integrity Challenge

AI’s ability to create human-quality text, solve complex problems, and produce creative outputs has presented a serious challenge to academic integrity. Reports show a significant rise in AI-driven cheating, with many students now routinely using these tools to complete their coursework. For a growing number of students, offloading cognitive labour, from summarising readings to generating entire essays, to AI is becoming the new norm.

This widespread and mostly undetectable cheating compromises the entire purpose of assessment: to verify genuine learning and award credible qualifications. Even students committed to authentic learning feel compromised, forced to compete against peers using AI for an unfair advantage.

Crucially, even when AI use is approved, there’s a legitimate concern that it can undermine the learning process itself. If students rely on AI for foundational tasks like summarisation and idea generation, they may bypass the essential cognitive engagement and critical thinking development. This reliance can lead to intellectual laziness, meaning the credentials universities bestow may no longer reliably signify genuine knowledge and skills. This creates an urgent imperative for institutions to act.

The Shift to Authentic Learning

While many believe we can address this simply by redesigning assignments, the challenge invites, and may even require, a structural shift towards more radical educational models. These new approaches, which have been emerging to address challenges of quality, access, and cost, may also prove to be the most effective response to the academic integrity problem.

To illustrate the point, let’s look at three examples of such emerging models:

  1. Flipped Learning: Students engage with core content independently online. Valuable in-person time is then dedicated to active learning like problem-solving, discussions, and collaborative projects. Educators can directly observe the application of knowledge, allowing for a more authentic assessment of understanding.
  2. Project-Based Learning (PBL): Often seen as an integrated flipped model, PBL immerses students in complex, integrated projects over extended periods. The focus is on applying knowledge from multiple modules and independent research to solve real-world problems. These projects demand sustained, supervised engagement, creative synthesis, and complex problem-solving, capabilities that are very hard to simply outsource to AI.
  3. Work-Based Learning (WBL): A significant part of the student’s journey takes place in authentic workplace settings. The emphasis shifts entirely to the demonstrable application of skills and knowledge in genuine professional contexts, a feat AI alone cannot achieve. Assessment moves to evaluating how a student performs and reflects in their role, including how they effectively and ethically integrate AI tools professionally.

AI as the Enabler of Change

Shifting to these models isn’t easy. Can institutions afford the resources to develop rich content, intricate project designs, and robust supervisory frameworks? Creating and assessing numerous, varied, and authentic tasks requires significant time and financial investment.

This is where technology, now including AI itself, becomes the key enabler for the feasibility of these new pedagogical approaches. Learning technologies, intelligently deployed, can help by:

  • Affordably Creating Content: AI tools rapidly develop diverse learning materials, including texts, videos and formative quizzes as well as more sophisticated assessment designs.
  • Providing Automated Learning Support: AI-powered tutors and chatbots offer 24/7 support, guiding students through challenging material, which personalises the learning journey.
  • Monitoring Independent Work: Learning analytics, enhanced by AI, track student engagement and flag struggling individuals. This allows educators to provide timely, targeted human intervention (a minimal sketch of such a flagging rule follows this list).
  • Easing the Assessment Burden: Technology can streamline the heavy workload associated with more varied assignments. Simple digital tools like structured rubrics and templated feedback systems free up educator time for nuanced, human guidance.
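To make the monitoring point concrete, here is a minimal Python sketch of the kind of flagging rule a learning-analytics pipeline might apply. The metric names, student names, and thresholds are illustrative assumptions for this sketch, not a reference to any particular VLE or vendor product.

    from dataclasses import dataclass

    @dataclass
    class Engagement:
        student: str
        logins_per_week: float  # sign-ins to the virtual learning environment
        quiz_average: float     # formative quiz score, 0-100

    def flag_for_intervention(records: list[Engagement],
                              min_logins: float = 2.0,
                              min_quiz: float = 40.0) -> list[str]:
        """Return students whose engagement suggests they need human follow-up.

        Fixed thresholds keep the sketch simple; a real system would
        calibrate them against cohort baselines instead.
        """
        return [r.student for r in records
                if r.logins_per_week < min_logins or r.quiz_average < min_quiz]

    cohort = [
        Engagement("A. Byrne", logins_per_week=4.5, quiz_average=68.0),
        Engagement("C. Doyle", logins_per_week=0.5, quiz_average=35.0),
    ]
    print(flag_for_intervention(cohort))  # -> ['C. Doyle']

The point of such a rule is triage, not automation for its own sake: the flag simply routes scarce educator time towards the students who appear to need it most.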

In summary, the most significant impact of AI isn’t the familiar promise of doing things better or faster. By undermining traditional methods of learning verification through the ease of academic dishonesty, AI has created an unavoidable pressure for systemic change. It forces colleges to reconsider what they are assessing and what value their degrees truly represent.

Rather, it is that AI, by challenging the old system so thoroughly, makes the redesign of higher education a critical necessity.

Brian Mulligan

E-learning Consultant
Universal Learning Systems (ulsystems.com)

Brian Mulligan is an e-learning consultant with Universal Learning Systems (ulsystems.com), having retired as Head of Online Learning Innovation at Atlantic Technological University in Sligo in 2022. His current interests include innovative models of higher education and the strategic use of learning technologies in higher education.




Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is plainly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the reasoning behind the answer cannot be satisfactorily explained. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool, but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed into (exclusively) ChatGPT and complete faith in the responses it generates. The dual deficit is clear: a lack of foundational knowledge of the topic, and a complete absence of critical engagement with the AI’s output.

Remedying this begins not with a single ‘aha!’ moment, but with a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they then systematically interrogate. Their task is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?
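Because the framework itself is described only in outline here, the sketch below is a hypothetical Python rendering of that interrogation exercise: a prompt template with explicit constraints, followed by the checklist of questions students apply to whatever the model returns. The template wording, function name, and example topic are illustrative assumptions, not materials from the module.

    from textwrap import dedent

    # Hypothetical worksheet: a constrained prompt plus the interrogation
    # checklist applied to the AI's response. Not the author's actual framework.
    PROMPT_TEMPLATE = dedent("""\
        Role: You are assisting a master's student.
        Task: Draft a 300-word literature review on {topic}.
        Constraints:
        - Cite only peer-reviewed sources, with full references.
        - State explicitly where the evidence is contested.
        - Use plain, succinct academic prose.""")

    CHECKLIST = [
        "Does the output adhere to every instruction in the prompt?",
        "Can each claim be verified with a credible, existing source?",
        "Do the cited papers exist, and are their findings represented fairly?",
        "Is there a hidden bias or a leading tone?",
        "Is the prose succinct, or padded and bland?",
    ]

    def interrogation_sheet(topic: str) -> str:
        """Build a worksheet: the prompt a student submits, then the
        questions they must answer about the AI's output."""
        questions = "\n".join(f"  {i}. {q}" for i, q in enumerate(CHECKLIST, 1))
        return (f"{PROMPT_TEMPLATE.format(topic=topic)}\n\n"
                f"Interrogate the output:\n{questions}")

    print(interrogation_sheet("service innovation in SMEs"))

Separating the prompt from the checklist mirrors the dual deficit described above: the template trains prompt construction, while the checklist forces critical engagement with whatever comes back.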

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.

