The Transformative Power of Communities of Practice in AI Upskilling for Educators

By Bernie Goldbach, RUN EU SAP Lead
Estimated reading time: 5 minutes
The power of collaboration: Communities of Practice are essential for educators to collectively navigate and integrate new AI technologies, transforming teaching and learning through shared knowledge and support. Image (and typos) generated by Nano Banana.

When the N-TUTORR programme ended in Ireland, I remained seated in the main Edtech25 auditorium to hear some of the final conversations among its key players. They stood at a remarkable intersection of professional development and technological innovation, and some of them issued a call to action for continued conversation, perhaps by engaging with generative AI tools within a Community of Practice (CoP).

Throughout my 40-year teaching career, I have walked pathways to genuine job satisfaction that extended far beyond simple skill acquisition. In my case, this satisfaction emerged from the synergy between collaborative learning, pedagogical innovation, and the excitement of uncharted territory unfolding alongside peers who share a commitment to educational excellence.

Finding Professional Fulfillment Through Shared Learning

The journey of upskilling in generative AI feels overwhelming when undertaken in isolation. I am still looking for a structured CoP for Generativism in Education. This would be a rich vein of collective discovery. At the moment, I have three colleagues who help me develop my skills with ethical and sustainable use of AI.

Ethan Mollick, whose research at the Wharton School has illuminated the practical applications of AI in educational contexts, consistently emphasises that the most effective learning about AI tools happens through shared experimentation and peer discussion. His work demonstrates that educators who engage collaboratively with AI technologies develop more sophisticated mental models of how these tools can enhance rather than replace pedagogical expertise. This collaborative approach alleviates the anxiety many educators feel about technological change, replacing it with curiosity and professional confidence.

Mairéad Pratschke, whose work emphasises the importance of collaborative professional learning, has highlighted how communities create safe spaces where educators can experiment, fail, and succeed together without judgment. This psychological safety becomes the foundation upon which genuine professional growth occurs.

Frances O’Donnell, whose insights at major conferences have become invaluable resources for educators navigating the AI landscape, directs the most effective AI workshops I have attended. Her hands-on training at conferences such as CESI (https://www.cesi.ie), EDULEARN (https://iceri.org), ILTA (https://ilta.ie), and Online Educa Berlin (https://oeb.global) has illuminated the engaging features of instructional design that emerge when educators thoughtfully integrate AI tools. Her instructional design frameworks demonstrate how AI can support the creation of personalised learning pathways, adaptive assessments, and multimodal content that engages diverse learners. O’Donnell’s emphasis on the human element in AI-assisted design resonates deeply with Communities of Practice.

And thanks to Frances O’Donnell, I discovered the AI assistants inside H5P.

Elevating Instructional Design Through AI-Assisted Tools

The most significant leap I have made comes from combining AI tools with collaborative professional learning to raise the quality of instructional design. The commercial version of H5P (https://h5p.com) has revolutionised my workflow for creating interactive educational content. The smart import feature of H5P.com complements my teaching practice: I can quickly design rich, engaging learning experiences that would previously have required specialised technical skills or significant time investment. I have discovered ways to create everything from interactive videos with embedded questions to gamified quizzes and sophisticated branching scenarios.

I hope I find a CoP in Ireland that is interested in several of the H5P workflows I have adopted. For the moment, I’m revealing these remarkable capabilities while meeting people at education events in Belgium, Spain, Portugal, and the Netherlands. It feels like I’m a town crier who has a notebook full of shared templates. I want to offer links to the interactive content that I have created with H5P AI and gain feedback from interested colleagues. But more than the conversations at the conferences, I’m interested in making real connections with educators who want to actively participate in vibrant online communities where sustained professional learning continues.

Sustaining Innovation with Community

Job satisfaction among educators has always been closely tied to their sense of efficacy and their ability to make meaningful impacts on student learning. Communities of Practice focused on AI upskilling amplify this satisfaction by creating networks of mutual support where members celebrate innovations, troubleshoot challenges, and collectively develop best practices. When an educator discovers an effective way to use AI for differentiation or assessment design, sharing that discovery with colleagues who understand the pedagogical context creates a profound sense of professional contribution.

These communities also combat the professional tension that currently faces proficient AI users. Mollick’s observations about blowback against widespread AI adoption in education reveal a critical imperative to stand together with a network that validates the quality of teaching and provides constructive feedback. When sharing with a community, individual risk-taking morphs into collective innovation, making the professional development experience inherently more satisfying and sustainable.

We need the spark of N-TUTORR inside an AI-focused Community of Practice. We need to amplify voices. Together we need to become confident navigators of innovation. We need to co-create contextually appropriate pedagogical approaches that effectively leverage AI in education.




This is not the end but a beginning: Responding to “Something Wicked This Way Comes”

By Kerith George-Briant and Jack Hogan, Abertay University Dundee
Estimated reading time: 5 minutes
Navigating the future: The “Two-Lane Approach” to Generative AI in assessment—balancing secure testing of threshold concepts (Lane 1) with open collaboration for developing AI literacy and critical thinking (Lane 2). Image (and typos) generated by Nano Banana.

O’Mahony’s provocatively titled blog post “Something Wicked This Way Comes” outlined feelings we recognised from across the sector: Generative AI (GenAI) tools have created unease, disruption, and uncertainty. At the same time, we felt that GenAI offered huge opportunities, and since higher education has led and celebrated innovation across disciplines for centuries, we were intrigued by how this would translate into our assessment practices.

At Abertay University, we’ve been exploring the “wicked problem” of whether to change teaching practices through a small-scale research project entitled “Lane Change Ahead: Artificial Intelligence’s Impact on Assessment Practices.” Our findings agree with O’Mahony’s observations that while GenAI does pose a challenge to academic integrity and traditional assessment models, it also offers opportunities for innovation, equity, and deeper learning, but we must respond thoughtfully and acknowledge that there are a variety of views on GenAI.

Academic Sensemaking

To understand colleagues’ perspectives and experiences, we applied Degn’s (2016) concept of academic sensemaking to understand how the colleagues we interviewed felt about GenAI. Findings showed that some assessment designers are decoupling, designing assessments that use GenAI outputs without requiring students to engage with the tools. Others are defiant or defeatist, allowing limited collaboration with GenAI tools but awarding a low percentage of the grade to that output. And some are strategic and optimistic, embracing GenAI as a tool for learning, creativity, and employability.

The responses show the reasons for unease are not just pedagogical; they’re deeply personal. GenAI challenges academic identity. Recognising this emotional response is essential to supporting staff if change is needed.

Detection and the Blurred Line

And change is needed, we would argue. Back in 2023, Perkins et al.’s analysis of Turnitin’s AI detection capabilities revealed that while 91% of fully AI-generated submissions were flagged, the average detection within each paper was only 54.8%, and only half of the flagged papers would have been referred for academic misconduct. Similar studies since then have continued to show the same types of results. And if detection isn’t possible, setting an “absurd line”, as Corbin et al put it, is ever more incongruous. There is no reliable way to tell whether a student stopped at using AI for brainstorming or engaged critically with AI-paraphrased output. Some may read this and think it’s game over; however, if we embrace these challenges and adapt our approaches, we can find solutions that are fit for purpose.
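The compounded effect of those figures is worth making concrete. As a back-of-envelope sketch (assuming, as a simplification not stated in the studies, that the “half of flagged papers referred” rate applies uniformly to the 91% of AI-generated submissions flagged), fewer than half of fully AI-generated submissions would ever reach a misconduct referral:

```python
# Back-of-envelope estimate from the detection figures quoted above.
# Simplifying assumption: the referral rate applies uniformly to
# flagged papers, and flagging/referral combine multiplicatively.
p_flagged = 0.91                 # fully AI-generated submissions flagged
p_referred_given_flagged = 0.5   # flagged papers actually referred

p_referred = p_flagged * p_referred_given_flagged
print(f"Estimated referral rate: {p_referred:.1%}")  # roughly 45.5%
```

Even under these generous assumptions, a majority of fully AI-generated work would escape formal consequence, which underlines why detection alone cannot carry assessment policy.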

From Fear to Framework: The Two-Lane Approach

So, what is the solution? Our research explored whether the two-lane approach developed by Liu and Bridgeman would work at Abertay, where:

  • Lane 1: Secure Assessments would be conducted under controlled conditions to assure learning of threshold concepts and
  • Lane 2: Open Assessments would allow unrestricted use of GenAI.

Our case studies revealed three distinct modes of GenAI integration:

  • AI Output Only – Students critiqued AI-generated content without using GenAI themselves. This aligned with Lane 1 and a secure assessment method focusing on threshold concepts.
  • Limited Collaboration – Students used GenAI for planning and a minimal piece of output within a larger piece of assessment, which did not allow GenAI use. Students developed some critical thinking, but weren’t able to apply this learning to the whole assessment.
  • Unlimited Collaboration – Students were fully engaged with GenAI, with reflection and justification built into the assessment. Assessment designers reported that students produced higher quality work and demonstrated enhanced critical thinking.

Each mode reflected a different balance of trust, control, and pedagogical intent. Interestingly, the AI Output Only pieces were secure and used to build AI literacy while meeting PSRB requirements, which asked for certain competencies and skills to be tested. The Limited Collaboration mode had an element of open assessment, but the percentage of the grade awarded to the output was minimal, and an absurd line was created by asking for no AI use in the larger part of the assessment. Finally, the assessments with unlimited collaboration were designed because those colleagues believed that writing without GenAI was not authentic and that employers would expect AI literacy skills, a belief perhaps not misplaced based on the figure given in O’Mahony’s blog.

Reframing the Narrative: GenAI as Opportunity

We see the need to treat GenAI as a partner in education, one that encourages critical reflection. This will require carefully scaffolded teaching activities to develop the AI literacy of students and avoid cognitive offloading. Thankfully, ways forward have begun to appear, as noted in the work of Gerlich and Jose et al.

Conclusion: From Wicked to ‘Witch’ Lane?

As educators, we have a choice. We can resist and decouple from GenAI, or we can choose to lead the narrative strategically and optimistically. Although the pathway forward may not be a yellow brick road, we believe it’s worth considering which lane may suit us best. The key is that we don’t do this in isolation, but take a pragmatic approach across the entire degree programme, considering the level of study and the appropriate AI literacy skills.

GenAI acknowledgement:
Microsoft Copilot (https://copilot.microsoft.com) – used to create a draft blog from our research paper.

Kerith George-Briant

Learner Development Manager
Abertay University

Kerith George-Briant manages the Learner Development Service at Abertay. Her key interests are in building best practices in using AI, inclusivity, and accessibility.

Jack Hogan

Lecturer in Academic Practice
Abertay University

Jack Hogan works within the Abertay Learning Enhancement (AbLE) Academy as a Lecturer in Academic Practice. His research interests include student transitions and the first-year experience, microcredentials, skills development and employability. 




AI Could Revolutionise Higher Education in a Way We Did Not Expect

by Brian Mulligan – e-learning consultant with Universal Learning Systems (ulsystems.com)
Estimated reading time: 5 minutes
Artificial intelligence is poised to unleash a revolution in higher education, not in the ways we’ve conventionally imagined, but through unexpected and profound transformations. This image visualises AI as a central, dynamic force reshaping academic landscapes, curriculum delivery, and the very nature of learning in universities. Image (and typos) generated by Nano Banana.

The current conversation about Artificial Intelligence (AI) in higher education primarily focuses on efficiency and impact. People talk about how AI can personalise learning, streamline administrative tasks, and help colleges “do more with less.” For decades, every new technology, from online training to MOOCs, promised a similar transformation. Generative AI certainly offers powerful tools to enhance existing processes.

However, the truly revolutionary potential of AI in higher education may come from a more critical and urgent pressure: its significant challenge to the integrity of academic credentials and the learning processes they are supposed to represent.

Historically, colleges haven’t had a strong incentive to completely overhaul their teaching models just because new technology arrived. Traditional lectures, established assessment methods, and the value of a physical campus have remained largely entrenched. Technology usually just served to augment existing practices, not to transform the underlying structures of teaching, learning, and accreditation.

AI, however, may be a different kind of catalyst for change.

The Integrity Challenge

AI’s ability to create human-quality text, solve complex problems, and produce creative outputs has presented a serious challenge to academic integrity. Reports show a significant rise in AI-driven cheating, with many students now routinely using these tools to complete their coursework. For a growing number of students, offloading cognitive labour, from summarising readings to generating entire essays, to AI is becoming the new norm.

This widespread and mostly undetectable cheating compromises the entire purpose of assessment: to verify genuine learning and award credible qualifications. Even students committed to authentic learning feel compromised, forced to compete against peers using AI for an unfair advantage.

Crucially, even when AI use is approved, there’s a legitimate concern that it can undermine the learning process itself. If students rely on AI for foundational tasks like summarisation and idea generation, they may bypass the essential cognitive engagement and critical thinking development. This reliance can lead to intellectual laziness, meaning the credentials universities bestow may no longer reliably signify genuine knowledge and skills. This creates an urgent imperative for institutions to act.

The Shift to Authentic Learning

While many believe we can address this just by redesigning assignments, the challenge invites, and may even require, a structural shift towards more radical educational models. These new approaches, which have been emerging to address the challenges of quality, access and cost, may also prove to be the most effective ways of addressing academic integrity challenges.

To illustrate the point, let’s look at three examples of such emerging models:

  1. Flipped Learning: Students engage with core content independently online. Valuable in-person time is then dedicated to active learning like problem-solving, discussions, and collaborative projects. Educators can directly observe the application of knowledge, allowing for a more authentic assessment of understanding.
  2. Project-Based Learning (PBL): Often seen as an integrated flipped model, PBL immerses students in complex, integrated projects over extended periods. The focus is on applying knowledge from multiple modules and independent research to solve real-world problems. These projects demand sustained, supervised engagement, creative synthesis, and complex problem-solving, capabilities that are very hard to simply outsource to AI.
  3. Work-Based Learning (WBL): A significant part of the student’s journey takes place in authentic workplace settings. The emphasis shifts entirely to the demonstrable application of skills and knowledge in genuine professional contexts, a feat AI alone cannot achieve. Assessment moves to evaluating how a student performs and reflects in their role, including how they effectively and ethically integrate AI tools professionally.

AI as the Enabler of Change

Shifting to these models isn’t easy. Can institutions afford the resources to develop rich content, intricate project designs, and robust supervisory frameworks? Creating and assessing numerous, varied, and authentic tasks requires significant time and financial investment.

This is where technology, now including AI itself, becomes the key enabler for the feasibility of these new pedagogical approaches. Learning technologies, intelligently deployed, can help by:

  • Affordably Creating Content: AI tools rapidly develop diverse learning materials, including texts, videos and formative quizzes as well as more sophisticated assessment designs.
  • Providing Automated Learning Support: AI-powered tutors and chatbots offer 24/7 support, guiding students through challenging material, which personalises the learning journey.
  • Monitoring Independent Work: Learning analytics, enhanced by AI, track student engagement and flag struggling individuals. This allows educators to provide timely, targeted human intervention.
  • Easing the Assessment Burden: Technology can streamline the heavy workload associated with more varied assignments. Simple digital tools like structured rubrics and templated feedback systems free up educator time for nuanced, human guidance.

In summary, the most significant impact of AI isn’t the familiar promise of doing things better or faster. By undermining traditional methods of learning verification through the ease of academic dishonesty, AI has created an unavoidable pressure for systemic change. It forces colleges to reconsider what they are assessing and what value their degrees truly represent.

AI, by challenging the old system so thoroughly, makes the redesign of higher education a critical necessity.

Brian Mulligan

E-learning Consultant
Universal Learning Systems (ulsystems.com)

Brian Mulligan is an e-learning consultant with Universal Learning Systems (ulsystems.com) having retired as Head of Online Learning Innovation at Atlantic Technological University in Sligo in 2022. His current interests include innovative models of higher education and the strategic use of learning technologies in higher education.




Dr. Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

by Tadhg Blommerde – Assistant Professor, Northumbria University
Estimated reading time: 5 minutes
In an era dominated by AI, educators are finding innovative ways to guide students. This image, inspired by a “Dr. Strange-Syllabus,” represents a pedagogical approach focused on fostering self-reliance and critical thinking, helping students to navigate the complexities of AI and ultimately trust their own capabilities. Image (and typos) generated by Nano Banana.

There is a scene I have witnessed many times in my classroom over the last couple of years. A question is posed, and before the silence has a chance to settle and spark a thought, a hand shoots up. The student confidently provides an answer, not from their own reasoning, but read directly from a glowing phone or laptop screen. Sometimes the answer is plainly wrong; other times it is plausible but subtly off, lacking the specific context of our course materials. Almost always, the reasoning behind the answer cannot be satisfactorily explained. This is the modern classroom reality. Students arrive with generative AI already deeply embedded in their personal lives and academic processes, viewing it not as a tool but as a magic machine, an infallible oracle. Their initial relationship with it is one of unquestioning trust.

The Illusion of the All-Knowing Machine

Attempting to ban this technology would be a futile gesture. Instead, the purpose of my teaching became to deliberately make students more critical and reflective users of it. At the start of my module, their overreliance is palpable. They view AI as an all-knowing friend, a collaborator that can replace the hard work of thinking and writing. In the early weeks, this manifests as a flurry of incorrect answers shouted out in class, the product of poorly constructed prompts fed (exclusively) into ChatGPT and complete faith in the responses generated. It is clear there is a dual deficit: a lack of foundational knowledge of the topic, and a complete absence of critical engagement with the AI’s output.

Remedying this begins not with a single ‘aha!’ moment, but through a cumulative, twelve-week process of structured exploration. I introduce a prompt engineering and critical analysis framework that guides students through writing more effective prompts and critically engaging with AI output. We move beyond simple questions and answers. I task them with having AI produce complex academic work, such as literature reviews and research proposals, which they would then systematically interrogate. Their task is to question everything. Does the output actually adhere to the instructions in the prompt? Can every claim and statement be verified with a credible, existing source? Are there hidden biases or a leading tone that misrepresents the topic or their own perspective?

Pulling Back the Curtain on AI

As they began this work, the curtain was pulled back on the ‘magic’ machine. Students quickly discovered the emperor had no clothes. They found AI-generated literature reviews cited non-existent sources or completely misrepresented the findings of real academic papers. They critiqued research proposals that suggested baffling methodologies, like using long-form interviews in a positivist study. This process forced them to rely on their own developing knowledge of module materials to spot the flaws. They also began to critique the writing itself, noting that the prose was often excessively long-winded, failed to make points succinctly, and felt bland. A common refrain was that it simply ‘didn’t sound like them’. They came to realise that AI, being sycophantic by design, could not provide the truly critical feedback necessary for their intellectual or personal growth.

This practical work was paired with broader conversations about the ethics of AI, from its significant environmental impact to the copyrighted material used in its training. Many students began to recognise their own over-dependence, reporting a loss of skills when starting assignments and a profound lack of satisfaction in their work when they felt they had overused this technology. Their use of the technology began to shift. Instead of a replacement for their own intellect, it became a device to enhance it. For many, this new-found scepticism extended beyond the classroom. Some students mentioned they were now more critical of content they encountered on social media, understanding how easily inaccurate or misleading information could be generated and spread. The module was fostering not just AI literacy, but a broader media literacy.

From Blind Trust to Critical Confidence

What this experience has taught me is that student overreliance on AI is often driven by a lack of confidence in their own abilities. By bringing the technology into the open and teaching them to expose its limitations, we do more than just create responsible users. We empower them to believe in their own knowledge and their own voice. They now see AI for what it is: not an oracle, but a tool with serious shortcomings. It has no common sense and cannot replace their thinking. In an educational landscape where AI is not going anywhere, our greatest task is not to fear it, but to use it as a powerful instrument for teaching the very skills it threatens to erode: critical inquiry, intellectual self-reliance, and academic integrity.

Tadhg Blommerde

Assistant Professor
Northumbria University

Tadhg is a lecturer (programme and module leader) and researcher who is proficient in quantitative and qualitative social science techniques and methods. His research to date has been published in the Journal of Business Research, The Service Industries Journal, and the European Journal of Business and Management Research. Presently, he holds dual roles as an Assistant Professor (Senior Lecturer) in Entrepreneurship at Northumbria University and an MSc dissertation supervisor at Oxford Brookes University.

His interests include innovation management; the impact of new technologies on learning, teaching, and assessment in higher education; service development and design; business process modelling; statistics and structural equation modelling; and the practical application and dissemination of research.




AI Adoption & Education for SMEs

by Patrick Shields – AI PhD Researcher, Munster Technological University
Estimated reading time: 5 minutes
Small and Medium-sized Enterprises (SMEs) are increasingly looking to leverage AI, but successful adoption requires proper education and strategic integration. This image represents the crucial need for training and understanding to empower SMEs to harness AI for business growth and innovation. Image (and typos) generated by Nano Banana.

Aligning National AI Goals With Local Business Realities

As third-level institutions launch AI courses across multiple disciplines this semester, there is a unique opportunity to support an essential business cohort in this country: small to medium-sized enterprises (SMEs). In Ireland, SMEs account for over 99% of all businesses according to the Central Statistics Office. They also happen to be struggling with AI adoption in comparison to their multinational counterparts.

Recent research has outlined how SMEs are adopting AI in a piecemeal and fragmented fashion, with just 10% possessing any AI strategy at all. Not having a strategy may indicate an absence of policy, and therein lies a significant communication issue at the heart of the AI adoption challenge. Further insights describe how four out of five business leaders believe AI is being used within their companies with little to no guardrails. This presents a significant challenge to Ireland’s National AI Strategy, which was originally published in 2021 but has since been updated to include several initiatives, such as the objective of establishing an AI awareness campaign for SMEs. The Government recognises that to achieve the original goal of 75% of all businesses embracing AI by 2030, and all the investment that this will encourage, it will be essential to address the gap between SMEs and their multinational counterparts. Perhaps these endeavours can be supported at third-level, especially given the percentage of businesses that fall into the SME bracket and the demand for upskilling.

Turning AI Potential Into Practical Know-How

Having spent the summer months of 2025 meeting businesses as part of a Chamber of Commerce AI mentoring initiative in the South East of Ireland, I believe there is a significant education gap that third-level institutions could help to close. It became clear that the business representatives I spoke to had serious questions about how to properly commence and embrace their AI journeys. For them, it wasn’t about the technical element, because many of their existing programs and applications were adding AI features and introducing new AI-enabled tools which they could easily access. The prominent issue was managing the process of deploying the technology in a way that matched employee buy-in with integrated, sustained and appropriate usage for maximum benefit. They require frameworks and education to roll this out effectively.

A real-world story:

As I returned to my AI adoption PhD studies this Autumn, I had the pleasure of meeting a part-time student employed by a local tech company in Cork. He wished to share the story of an AI initiative his employer had embarked upon, which had left him feeling anxious. The company had rolled out an AI meeting transcription tool to the surprise of its employees. There had been no prior communication about its deployment, and the tool was now in active use inside the organisation. This particular student felt that the AI was useful but had its limitations, such as not being able to identify speakers in the AI-generated meeting transcripts. He had his doubts as to whether the tool would stay in use at his workplace, and he had not received any policy documents related to its correct handling. He also stated that he was not aware whether the organisation had an AI strategy, and the manner in which the technology had been integrated into daily operations had left him and his colleagues feeling quite uneasy. He felt that the team would have benefited enormously from some communication before and during the rollout. This same student was looking to commence a course in effective AI adoption and proclaimed his belief that the industry was crying out for more training and development in this area.

The above tale of a potentially failing deployment is unfortunately not an isolated case. Reports in the US have shown that up to 95% of AI pilots fail before they ever make it to full production inside organisations. There may be many complex reasons for this, but one must certainly be a lack of understanding of the cultural impact of such change on teams, compounded by many examples of inadequate communication. It appears to me that despite the global investment in technology and the genuine intention to embrace AI, organisations continue to struggle with the employee education aspect of this transformation. If employers prioritise training and development in partnership with education providers, they may dramatically increase their chances of success. This may include the establishment of joint frameworks for AI deployment and management, with educational courses aligned to emerging business needs.

In adopting a people development approach, companies may not only improve the chances of AI pilot success but also foster trust, alignment and buy-in. Surely this is the real promise of AI: a better, brighter organisational future, starting this winter, where your greatest asset, your people, is not left out in the cold but supported by higher education.

Links

https://www.irishtimes.com/business/economy/2024/08/19/smes-account-for-998-of-all-businesses-in-ireland-and-employ-two-thirds-of-workforce/

https://www.tcd.ie/news_events/articles/2025/ai-expected-to-add-250bn-to-irelands-economy-by-2035

https://enterprise.gov.ie/en/publications/national-ai-strategy.html

https://enterprise.gov.ie/en/publications/national-ai-strategy-refresh-2024.html

https://www.forbes.com/sites/andreahill/2025/08/21/why-95-of-ai-pilots-fail-and-what-business-leaders-should-do-instead

Patrick Shields

AI PhD Researcher
Munster Technological University

I am a PhD Researcher at Munster Technological University, researching how small and medium-sized businesses adopt Artificial Intelligence with a particular focus on the human, strategic and organisational dynamics involved. My work looks beyond the technical layer, exploring how AI can be introduced in practical, low-friction ways that support real business outcomes.

