2025 Review – A Shared Conversation, Built Over Time

Estimated reading time: 9 minutes

From Individual Questions to Collective Practice

Since September, the GenAI:N3 blog has hosted a weekly series of reflections exploring what generative AI means for higher education: for teaching, learning, assessment, academic identity, and institutional responsibility. Early contributions captured a sector grappling with disruption, uncertainty, and unease, asking difficult questions about trust, integrity, creativity, and control at a moment when generative AI arrived faster than policy, pedagogy, or professional development could respond.

As the series unfolded, a clear shift began to emerge. Posts moved from individual reactions and early experimentation towards more structured sense-making, discipline-specific redesign and, crucially, shared learning. The introduction of communities of practice as a deliberate strategy for AI upskilling marked a turning point in the conversation: from “How do I deal with this?” to “How do we learn, adapt, and shape this together?” Taken as a whole, the series traces that journey from disruption to agency, and from isolated responses to collective practice.

What makes this series distinctive is not simply its focus on generative AI, but the diversity of voices it brings together. Contributors include academic staff, professional staff, educational developers, students, and sector partners, each writing from their own context while engaging with a set of common challenges. The result is not a single narrative, but a constellation of perspectives that reflect the complexity of teaching and learning in an AI-shaped world.

29th September 2025 – Jim O’Mahony

Something Wicked This Way Comes

The GenAI:N3 blog series opens with a deliberately unsettling provocation, asking higher education to confront the unease, disruption, and uncertainty that generative AI has introduced into teaching, assessment, and academic identity. Rather than framing AI as either saviour or villain, this piece invites a more honest reckoning with fear, denial, and institutional inertia. It sets the tone for the series by arguing that ignoring GenAI is no longer an option; what matters now is how educators respond, individually and collectively, to a technology that has already crossed the threshold into everyday academic practice.

6th October 2025 – Dr Yannis

3 Things AI Can Do for You: The No-Nonsense Guide

Building directly on that initial unease, this post grounds the conversation in pragmatism. Stripping away hype and alarmism, it focuses on concrete, immediately useful ways AI can support academic work, from sense-making to productivity. The emphasis is not on replacement but augmentation, encouraging educators to experiment cautiously, critically, and with intent. In the arc of the series, this contribution marks a shift from fear to agency, demonstrating that engagement with AI can be practical, purposeful, and aligned with professional judgement.

13th October 2025 – Sue Beckingham & Peter Hartley

New Elephants in the Generative AI Room? Acknowledging the Costs of GenAI to Develop ‘Critical AI Literacy’

As confidence in experimentation grows, this post re-introduces necessary friction by surfacing the hidden costs of generative AI. Environmental impact, labour implications, equity, and ethical responsibility are brought into sharp focus, challenging overly simplistic narratives of efficiency and innovation. The authors remind readers that responsible adoption requires confronting uncomfortable trade-offs. Within the wider series, this piece deepens the discussion, insisting that values, sustainability, and social responsibility must sit alongside pedagogical opportunity.

20th October 2025 – Jonathan Sansom

Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents

Here the conversation turns toward structured sense-making. Drawing on strategic and pedagogical frameworks, this post explores how educators and institutions can move beyond reactive responses to more deliberate design choices. The idea of AI as a “copilot” rather than an autonomous actor reframes the relationship between teacher, learner, and technology. In the narrative of the series, this contribution offers conceptual tools for navigating complexity, helping readers connect experimentation with strategy.

27th October 2025 – Patrick Shields

AI Adoption & Education for SMEs

Widening the lens beyond universities, this post examines AI adoption through the perspective of small and medium-sized enterprises, highlighting the skills, mindsets, and educational approaches needed to support workforce readiness. The crossover between higher education, lifelong learning, and industry becomes explicit. This piece situates GenAI not just as an academic concern, but as a societal one, reinforcing the importance of education systems that are responsive, connected, and outward-looking.

3rd November 2025 – Tadhg Blommerde

Dr Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves

Returning firmly to the classroom, this reflective account explores what happens when students are encouraged to engage critically with AI rather than rely on it unthinkingly. Through curriculum design and assessment choices, learners begin to question outputs, assert their own judgement, and reclaim intellectual agency. This post is a turning point in the series, showing how thoughtful pedagogy can transform AI from a threat to academic integrity into a catalyst for deeper learning.

10th November 2025 – Brian Mulligan

AI Could Revolutionise Higher Education in a Way We Did Not Expect

This contribution steps back to consider second-order effects, arguing that the most significant impact of AI may not be efficiency or automation, but a reconfiguration of how learning, expertise, and value are understood. It challenges institutions to think beyond surface-level policy responses and to anticipate longer-term cultural shifts. Positioned mid-series, the post broadens the horizon, encouraging readers to think systemically rather than tactically.

17th November 2025 – Kerith George-Briant & Jack Hogan

This Is Not the End but a Beginning: Responding to “Something Wicked This Way Comes”

Explicitly dialoguing with the opening post, this response reframes the initial sense of threat as a starting point rather than a conclusion. The authors emphasise community, dialogue, and shared responsibility, arguing that collective reflection is essential if higher education is to navigate GenAI well. This piece reinforces one of the central through-lines of the series: that no single institution or individual has all the answers, but progress is possible through collaboration.

24th November 2025 – Bernie Goldbach

The Transformative Power of Communities of Practice in AI Upskilling for Educators

This post makes the case that the most sustainable way to build AI capability in education is not through one-off training sessions, but through communities of practice that support ongoing learning, experimentation, and shared problem-solving. It highlights how peer-to-peer dialogue helps educators move from cautious curiosity to confident, critical use of tools, while also creating space to discuss ethics, assessment, and evolving norms without judgement. Positioned within the blog series, it serves as a bridge between individual experimentation and institutional change: a reminder that AI upskilling is fundamentally social, and that collective learning structures are one of the best defences against both hype and paralysis.

1st December 2025 – Hazel Farrell et al.

Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy

Offering a discipline-specific lens, this post explores how music education is being rethought in light of AI, creativity, and emerging professional realities. Rather than diluting artistic practice, AI becomes a catalyst for re-examining what it means to teach, learn, and create. Within the series, this contribution demonstrates how GenAI conversations translate into authentic curriculum redesign, grounded in disciplinary values rather than generic solutions.

8th December 2025 – Ken McCarthy

Building the Manifesto: How We Got Here and What Comes Next

This reflective piece pulls together many of the threads running through the series, documenting the collaborative process behind the Manifesto for Generative AI in Higher Education. It positions the Manifesto not as a prescriptive policy document but as a living statement shaped by diverse voices, shared concerns, and collective aspiration. In the narrative arc, it represents a moment of synthesis, turning discussion into a shared point of reference.

15th December 2025 – Leigh Graves Wolf

Rebuilding Thought Networks in the Age of AI

Moving from frameworks to cognition, this post explores how AI is reshaping thinking itself. Rather than outsourcing thought, the author argues for intentionally rebuilding intellectual networks so that AI becomes part of, not a replacement for, human sense-making. This contribution deepens the series philosophically, reminding readers that the stakes of GenAI are as much cognitive and epistemic as they are technical.

22nd December 2025 – Frances O’Donnell

Universities: GenAI – There’s No Stopping, Start Shaping!

The series culminates with a clear call to action. Acknowledging both inevitability and responsibility, this post urges universities to move decisively from reaction to leadership. The emphasis is on shaping futures rather than resisting change, grounded in values, purpose, and public good. As a closing note, it captures the spirit of the entire series: GenAI is already here, but how it reshapes higher education remains a choice.

With Thanks – and an Invitation

This series exists because of the generosity, openness, and intellectual courage of its contributors. Each author took the time to reflect publicly, to question assumptions, to share practice, and to contribute thoughtfully to a conversation that is still very much in motion. Collectively, these posts embody the spirit of GenAI:N3 – collaborative, reflective, and committed to shaping the future of higher education with care rather than fear.

We would like to extend our sincere thanks to all who have contributed to the blog to date, and to those who have engaged with the posts through reading, sharing, and discussion. The conversation does not end here. If you are experimenting with generative AI in your teaching, supporting others to do so, grappling with its implications, or working with students as partners in this space, we warmly invite you to write a blog post of your own. Your perspective matters, and your experience can help others navigate this rapidly evolving landscape.

If you would like to contribute, please get in touch (blog@genain3.ie); we would love to hear from you.


Date | Title | Author | Link
29 September 2025 | Something Wicked This Way Comes | Jim O’Mahony | https://genain3.ie/something-wicked-this-way-comes/
6 October 2025 | 3 Things AI Can Do for You: The No-Nonsense Guide | Dr Yannis | https://genain3.ie/3-things-ai-can-do-for-you-the-no-nonsense-guide/
13 October 2025 | New Elephants in the Generative AI Room? Acknowledging the Costs of GenAI to Develop ‘Critical AI Literacy’ | Sue Beckingham & Peter Hartley | https://genain3.ie/new-elephants-in-the-generative-ai-room-acknowledging-the-costs-of-genai-to-develop-critical-ai-literacy/
20 October 2025 | Making Sense of GenAI in Education: From Force Analysis to Pedagogical Copilot Agents | Jonathan Sansom | https://genain3.ie/making-sense-of-genai-in-education-from-force-analysis-to-pedagogical-copilot-agents/
27 October 2025 | AI Adoption & Education for SMEs | Patrick Shields | https://genain3.ie/ai-adoption-education-for-smes/
3 November 2025 | Dr Strange-Syllabus or: How My Students Learned to Mistrust AI and Trust Themselves | Tadhg Blommerde | https://genain3.ie/dr-strange-syllabus-or-how-my-students-learned-to-mistrust-ai-and-trust-themselves/
10 November 2025 | AI Could Revolutionise Higher Education in a Way We Did Not Expect | Brian Mulligan | https://genain3.ie/ai-could-revolutionise-higher-education-in-a-way-we-did-not-expect/
17 November 2025 | This Is Not the End but a Beginning: Responding to “Something Wicked This Way Comes” | Kerith George-Briant & Jack Hogan | https://genain3.ie/this-is-not-the-end-but-a-beginning-responding-to-something-wicked-this-way-comes/
24 November 2025 | The Transformative Power of Communities of Practice in AI Upskilling for Educators | Bernie Goldbach | https://genain3.ie/the-transformative-power-of-communities-of-practice-in-ai-upskilling-for-educators/
1 December 2025 | Teaching the Future: How Tomorrow’s Music Educators Are Reimagining Pedagogy | Hazel Farrell et al. | https://genain3.ie/teaching-the-future-how-tomorrows-music-educators-are-reimagining-pedagogy/
8 December 2025 | Building the Manifesto: How We Got Here and What Comes Next | Ken McCarthy | https://genain3.ie/building-the-manifesto-how-we-got-here-and-what-comes-next/
15 December 2025 | Rebuilding Thought Networks in the Age of AI | Leigh Graves Wolf | https://genain3.ie/rebuilding-thought-networks-in-the-age-of-ai/
22 December 2025 | Universities: GenAI – There’s No Stopping, Start Shaping! | Frances O’Donnell | https://genain3.ie/universities-genai-theres-no-stopping-start-shaping/

HEA – Generative AI in Higher Education Teaching & Learning: Policy Framework


Source

O’Sullivan, James, Colin Lowry, Ross Woods & Tim Conlon. Generative AI in Higher Education Teaching &
Learning: Policy Framework. Higher Education Authority, 2025. DOI: 10.82110/073e-hg66.

Summary

This policy framework provides a national, values-based approach to guiding the adoption of generative artificial intelligence (GenAI) in teaching and learning across Irish higher education institutions. Rather than prescribing uniform rules, it establishes a shared set of principles to support informed, ethical, and pedagogically sound decision-making. The framework recognises GenAI as a structural change to higher education—particularly to learning design, assessment, and academic integrity—requiring coordinated institutional and sector-level responses rather than ad hoc or individual initiatives.

Focused explicitly on teaching and learning, the framework foregrounds five core principles: academic integrity and transparency; equity and inclusion; critical engagement, human oversight, and AI literacy; privacy and data governance; and sustainable pedagogy. It emphasises that GenAI should neither be uncritically embraced nor categorically prohibited. Instead, institutions are encouraged to adopt proportionate, evidence-informed approaches that preserve human judgement, ensure fairness, protect student data, and align AI use with the public mission of higher education. The document also outlines how these principles can be operationalised through governance, assessment redesign, staff development, and continuous sector learning.

Key Points

  • The framework offers a shared national reference point rather than prescriptive rules.
  • GenAI is treated as a systemic pedagogical challenge, not a temporary disruption.
  • Academic integrity depends on transparency, accountability, and visible authorship.
  • Equity and inclusion must be designed into AI adoption from the outset.
  • Human oversight and critical engagement remain central to learning and assessment.
  • AI literacy is positioned as a core capability for staff and students.
  • Privacy, data protection, and institutional data sovereignty are essential.
  • Assessment practices must evolve beyond reliance on traditional written outputs.
  • Sustainability includes both environmental impact and long-term educational quality.
  • Ongoing monitoring and sector-wide learning are critical to responsible adoption.

Conclusion

The HEA Policy Framework positions generative AI as neither a threat to be resisted nor a solution to be uncritically adopted. By grounding AI integration in shared academic values, ethical governance, and pedagogical purpose, it provides Irish higher education with a coherent foundation for navigating AI-enabled change while safeguarding trust, equity, and educational integrity.

URL

https://hea.ie/2025/12/22/hea-publishes-national-policy-framework-on-generative-ai-in-teaching-and-learning/

Summary generated by ChatGPT 5.2


Universities: GenAI – There’s No Stopping, Start Shaping!

By Frances O’Donnell, Instructional Designer, ATU
Estimated reading time: 8 minutes
Moving from debate to action: Implementing a cross-departmental strategy to shape the future of GenAI in higher education. Image (and typos) generated by Nano Banana.

Debate about GenAI continues to swing between those pushing rapid adoption and those urging caution – for example, panic about “AI taking over the classroom” and outrage at Big Tech’s labour practices. Both concerns are important, but are they and others like them causing inaction? In many cases, we are quietly watching students hand their data and their critical thinking over to the very Big Tech companies we are arguing against (while we still fly on holidays, stream on smart TVs and buy the same devices from the same companies). Pretending that GenAI in education is the one place we finally draw an ethical line, while doing nothing to make its use safer or more equitable, is not helpful. By all means, keep debating, but not at the cost of another three or four cohorts.

This opinion post suggests three things universities should address now, outlines a minimal set of GenAI functions that should be available to staff and students, and proposes a four-step teaching process to help lecturers rethink their role with GenAI.

Three things universities need to address now

1. Tell students and staff clearly what they can use (Déjà vu?)

Students and staff deserve clarity on which GenAI tools they have access to, what they can use them for and which ones are institutionally supported. Has your university provided this? No more grey areas or “ask your lecturer”. If people do not know this, GenAI use is pushed into secrecy. That secrecy hands more power to Big Tech to extract data and embed bias, while also quietly eroding students’ and staff members’ cognitive skills.

2. Untangle GenAI from “academic integrity”

Tightly linking GenAI to academic integrity was a mistake! It has created an endless debate about whether to permit or prohibit GenAI, which pushes use further underground. At this point, there is no real equity and no real academic integrity. Use of GenAI cannot simply be stopped, proved or disproved, so pretending otherwise, while holding endless anti‑AI discussions, will not lead to a solution. There is no putting GenAI back in the bottle!

3. Treat GenAI as a shared responsibility

GenAI affects curriculum design, assessment, student support, digital literacy, employability, libraries, disability support, IT, policy and everywhere in between. It cannot sit on the shoulders of one department or lead. Every university needs a cross‑departmental AI strategy that includes the student union, academic leads, IT, the data protection office, careers, student support, administration and teaching and learning personnel. Until leadership treats GenAI as systemic, lecturers will keep firefighting contradictions and marking assignments they know were AI-generated. Bring everyone to the table, and don’t adjourn until decisions have been made that give students and staff clarity (even if that clarity is dynamic in nature – do not continue to leave them navigating this alone for another three years).

What GenAI functions should be provided

At a minimum, institutions should give safe, equitable access to:

  • A campus-licensed GenAI model
    One model for all staff and students to ask questions, draft, summarise, explain and translate text, including support for multilingual learners.
  • Multimodal creation tools
    Tools to create images, audio, video (including avatars), diagrams, code, etc., with clear ethical and legal guidance.
  • Research support tools
    Tools to support research, transcribing, coding, summaries, theme mapping, citations, etc., that reinforce critical exploration.
  • Assessment and teaching design tools
    Tools to draft examples, case variations, rubrics, flashcards, questions, etc., stored inside institutional systems.
  • Custom agents
    Staff create and share custom AI agents configured for specific purposes: subject-specific scaffolding for students, or workflow agents for planning, resource creation and content adaptation. Keep interactions within institutional systems.
  • Accessibility-focused GenAI
    Tools that deliver captions, plain language rewrites, alt text and personalised study materials. Many institutions already have these in place.

Safer GenAI tools for exploration, collaboration and reflection – but what do staff and students do with them? This is where something like Gen.S.A.R. comes in: a potential approach in which staff and students explore together with GenAI, and one that is adaptable to different contexts and disciplines.

Gen.S.A.R.

Gen.S.A.R. is simply a suggested starting point; there is no magic wand, but this may help to ignite practical ideas from others. It suggests a shift from passive content delivery to constructivist and experiential learning.

  • GenAI exploration and collaborative knowledge construction
  • Scrutinise and share
  • Apply in real-world contexts with a low or no-tech approach
  • Reflect and evaluate

It keeps critical thinking, collaboration and real-world application at the centre, with GenAI as a set of tools rather than a replacement for learning. Note: GenAI is a set of tools, not a human!

Phase 1: GenAI, constructing, not copy-pasting

Students use GenAI, the lecturer, and reputable sources to explore a concept or problem linked to the learning outcomes. Lecturers guide this exploration as students work individually or in groups. With ongoing lecturer input, students may choose whether to use GenAI or other sources, but all develop an understanding of GenAI’s role in learning.

Phase 2: Scrutinise and Share

The second phase focuses on scrutinising and sharing ideas with others, not just presenting them as finished facts. Students bring GenAI outputs, reputable sources and their own thinking into dialogue. They interrogate evidence, assumptions and perspectives in groups or class discussion (social constructivism, dialogic teaching). The lecturer – the content expert – oversees this process, identifies errors, draws attention to them and helps students clarify GenAI outputs.

Phase 3: Apply, low-tech, real-world

Screens step back. Students apply what they have discovered in low or no-tech ways: diagrams, mind maps, zines, prototypes, role plays, scenarios. They connect what they discovered to real contexts and show understanding through doing, making, explaining and practical application.

Phase 4: Reflect, evaluate and look forward

Students then evaluate and reflect on both their learning process and the role of GenAI. Using written, audio, video or visual reflections, they consider what they learned, how GenAI supported or distorted that learning and how this connects to their future. This reflective work, combined with artefacts from earlier phases, supports peer, self and lecturer assessment and moves us towards competency and readiness-based judgements.

Resourcing Gen.S.A.R.: yes, smaller class sizes and support would be required, but aspects of this can be implemented now (and are being implemented by some already). Time shifts to facilitation, co-learning, process feedback and authentic evaluation (fewer three-thousand-word essays). This approach is not perfect, but at least it is an approach, and one that draws on long‑standing learning theories, including constructivism, social constructivism, experiential learning, and traditions in inclusive and competency‑based education.

There’s No Stopping It, Time to Shape It

GenAI is not going away. Exploitative labour practices, data abuse and profit motives are real (and not exclusive to AI), and naming these harms is essential, but letting these debates crowd out any movement is not helpful. Universities can choose to lead (and I commend, not condemn, those who already are) with clear guidance, equitable access to safe GenAI tools and thoughtful learning design. The alternative is all the risks associated with students and staff relying on personal accounts and workarounds.

For the integrity of education itself, it is time to translate debates into action. The genie is not going back in the bottle, and our profit-driven society is not only shaped by Big Tech but also by the everyday choices of those of us living privileged lives in westernised societies. It is time to be honest about our own complicity, to step out of the ivory tower and work with higher education students to navigate the impact GenAI is having on their lives right now.

Note: My views on GenAI for younger learners are very different; the suggestions here focus specifically on higher education.

Frances O’Donnell

Instructional Designer
ATU

Exploring the pros and cons of AI & GenAI in education, and indeed in society. Currently completing a Doctorate in Education with a focus on AI & Emerging Technologies.

Passionate about the potential education has to develop one’s self-confidence and self-worth, but frustrated by the fact it often does the opposite. AI has magnified our tendency to overassess and our inability to truly move away from rote learning.

Whether I’m carrying out the role of an instructional designer, or delivering workshops or researching, I think we should work together to make education a catalyst of change where learners are empowered to become confident as well as socially and environmentally conscious members of society. With or without AI, let’s change the perception of what success looks like for young people.

Rebuilding Thought Networks in the Age of AI

By Leigh Graves Wolf, University College Dublin Teaching & Learning
Estimated reading time: 5 minutes
Strengthening the mind: Highlighting the crucial need and methodology for intentionally restructuring and reinforcing human cognitive and critical thinking skills in an environment increasingly dominated by artificial intelligence. Image (and typos) generated by Nano Banana.

Thinking is a social activity. This isn’t a new insight (scholars have studied this for ages) but it’s one I keep coming back to lately as I try to stay afloat in the “AI Era.”

For a long stretch as I developed as an academic, I thought with others through technology (e.g. del.icio.us, Typepad, and Twitter). We would bounce ideas off each other, glean golden nuggets of information, share resources that sparked new connections in our minds. There was something magical about that era: the serendipitous discovery of a colleague’s bookmark that led you down an unexpected intellectual rabbit hole, or a Twitter thread that challenged your thinking in ways you hadn’t anticipated. These weren’t just tools; they were extensions of our collective scholarly brain.

Then, all of that broke. (And I’m still. not. over it.)

When Our Digital Commons Began to Fracture

Koutropoulos et al. (2024) speak more eloquently to this fragmentation. They capture something I’ve been feeling but failing to articulate as clearly: the way our digital spaces have become increasingly unstable, the way platforms that once felt like home can shift beneath our feet overnight. Their collaborative autoethnography explores the metaphors we use to describe this movement and ultimately concludes that no single term captures what’s happening. What resonates most is their observation that we were never truly in control of these spaces; we were building communities on “the fickle and shifting sands of capitalism.”

Commercial generative AI feels social. You’re chatting, prompting, getting responses that (by design) seem thoughtful and engaged. But fundamentally, it is not social. You’re talking with biased algorithms. There is no human in the loop, no colleague who might push back on your thinking from their own lived experience, no peer who might share a resource you’d never have found on your own, no friend who might simply say “I’ve been thinking about this too.”

I haven’t seen genuine sharing built into any commercial generative AI tools. NotebookLM will let you share content that others can interact with, and other tools allow you to create bots – but again, you’re not linking with a human. You’re not building the web of ambient findability (Morville, 2005) that made those early social media days so generative. There’s no AI equivalent of stumbling upon a colleague’s carefully curated collection and thinking, “Oh, they’re interested in this too – I should reach out.”

So in this fragmented, overly connected yet profoundly disconnected world, how do we stay connected to each other and each other’s ideas? I need my thought network now more than ever. And I suspect you do too.

Choosing Human Connection in an Algorithmic Age

Here are a few tools that have helped me navigate this landscape:

Raindrop.io – it’s not as social as del.icio.us was (oh, how I miss those days!), but it is a bookmark management tool that helps me keep track of the deluge of AI articles (and all sorts of other things) coming my way. I’ve made my collection public because (surprise!) I believe in working out loud and sharing what I’m learning. You can find it here: https://raindrop.io/leigh-wolf/ai-62057797.

RSS is Awesome is an “in-progress passion project” by Tom Hazledine. It has now become my morning ritual to open up this lovely, lightweight, no-login-needed, browser-based tool to catch up on my feeds. There’s something deeply satisfying about returning to RSS – a technology that puts the reader in control rather than an algorithm. (And yes, you can add the GenAI:N3 Blog to your feed simply by adding this URL: https://genain3.ie/blog/)
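For readers who would rather script their feed-reading than use a browser tool, an RSS feed is just XML and can be read with a few lines of standard-library Python. The sketch below is illustrative only: it parses a small sample feed string (the item shown is taken from the series table above) rather than fetching the live URL, and it assumes an ordinary RSS 2.0 layout with `<item>`, `<title>` and `<link>` elements.

```python
# Minimal RSS 2.0 parsing with the Python standard library.
# This parses a self-contained sample feed; a real reader would fetch
# the feed an RSS tool discovers from https://genain3.ie/blog/ instead.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>GenAI:N3 Blog</title>
    <item>
      <title>Something Wicked This Way Comes</title>
      <link>https://genain3.ie/something-wicked-this-way-comes/</link>
    </item>
  </channel>
</rss>"""

def feed_items(xml_text: str) -> list[dict]:
    """Return each <item> in the feed as a dict with its title and link."""
    root = ET.fromstring(xml_text)
    return [
        {"title": item.findtext("title"), "link": item.findtext("link")}
        for item in root.iter("item")
    ]

for entry in feed_items(SAMPLE_FEED):
    print(entry["title"], "->", entry["link"])
```

Because the reader pulls the feed on its own schedule, this kind of script keeps you, not an engagement algorithm, in control of what you see.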

We need each other more than ever to navigate this sea of (mis)information. The platforms are fragmenting, the algorithms are optimising for engagement rather than insight, and AI offers the feeling of conversation without its substance – but we still have each other. We still have the ability to share, to curate, to point each other toward ideas worth wrestling with.

As Koutropoulos et al. (2024) challenge us, the solution isn’t to find the perfect platform – it’s to “take charge of our own data” and to invest in relationships with the humans in our educational networks. The platforms will always ebb and flow. But the connections we build with each other can (and do!) persist across whatever digital landscape emerges next.

Hold on to each other. Hold on to the tools that are enabling rather than disabling us to do this work together. And maybe, just maybe, start (re)building those thought networks – one shared bookmark, one RSS subscription, one genuine human connection at a time.

What tools are helping you stay connected to others’ thinking? What spaces have you found that still feel like home? I would love to know – please reach out in the comments below!!

Reference

Koutropoulos, A., Stewart, B., Singh, L., Sinfield, S., Burns, T., Abegglen, S., Hamon, K., Honeychurch, S., & Bozkurt, A. (2024). Lines of flight: The digital fragmenting of educational networks. Journal of Interactive Media in Education, 2024(1), 11. https://doi.org/10.5334/jime.850

Morville, P. (2005). Ambient findability: What we find changes who we become. O’Reilly.

Leigh Graves Wolf

Assistant Professor
University College Dublin

Leigh Graves Wolf is a teacher-scholar and an Assistant Professor in Educational Development with Teaching and Learning at UCD. Her work focuses on online education, critical digital pedagogy, educator professional development and relationships mediated by and with technology. She has worked across the educational spectrum, from primary to higher to further and lifelong learning. She believes passionately in collaboration and community.

Australian Framework for Artificial Intelligence in Higher Education


Source

Lodge, J. M., Bower, M., Gulson, K., Henderson, M., Slade, C., & Southgate, E. (2025). Australian Framework for Artificial Intelligence in Higher Education. Australian Centre for Student Equity and Success, Curtin University.

Summary

This framework provides a national roadmap for the ethical, equitable, and effective use of artificial intelligence (AI)—including generative and agentic AI—across Australian higher education. It recognises both the transformative potential and inherent risks of AI, calling for governance structures, policies, and pedagogies that prioritise human flourishing, academic integrity, and cultural inclusion. The framework builds on the Australian Framework for Generative AI in Schools but is tailored to the unique demands of higher education: research integrity, advanced scholarship, and professional formation in AI-enhanced contexts.

Centred around seven guiding principles—human-centred education, inclusive implementation, ethical decision-making, Indigenous knowledges, ethical development, adaptive skills, and evidence-informed innovation—the framework links directly to the Higher Education Standards Framework (Threshold Standards) and the UN Sustainable Development Goals. It emphasises AI literacy, Indigenous data sovereignty, environmental sustainability, and the co-design of equitable AI systems. Implementation guidance includes governance structures, staff training, assessment redesign, cross-institutional collaboration, and a coordinated national research agenda.

Key Points

  • AI in higher education must remain human-centred and ethically governed.
  • Generative and agentic AI should support, not replace, human teaching and scholarship.
  • Institutional AI frameworks must align with equity, inclusion, and sustainability goals.
  • Indigenous knowledge systems and data sovereignty are integral to AI ethics.
  • AI policies should be co-designed with students, staff, and First Nations leaders.
  • Governance requires transparency, fairness, accountability, and contestability.
  • Staff professional learning should address ethical, cultural, and environmental dimensions.
  • Pedagogical design must cultivate adaptive, critical, and reflective learning skills.
  • Sector-wide collaboration and shared national resources are key to sustainability.
  • Continuous evaluation ensures AI enhances educational quality and social good.

Conclusion

The framework positions Australia’s higher education sector to lead in responsible AI adoption. By embedding ethical, equitable, and evidence-based practices, it ensures that AI integration strengthens—not undermines—human expertise, cultural integrity, and educational purpose. It reaffirms universities as stewards of both knowledge and justice in an AI-shaped future.

URL

https://www.acses.edu.au/publication/australian-framework-for-artificial-intelligence-in-higher-education/

Summary generated by ChatGPT 5.1