Universities: GenAI – There’s No Stopping, Start Shaping!

By Frances O’Donnell, Instructional Designer, ATU
Estimated reading time: 8 minutes
Moving from debate to action: Implementing a cross-departmental strategy to shape the future of GenAI in higher education. Image (and typos) generated by Nano Banana.

Debate continues to swing between those pushing rapid adoption of GenAI and those advocating caution: panic about “AI taking over the classroom” on one side, outrage at Big Tech’s labour practices on the other. Both concerns are important, but are these and other concerns causing inaction? In many cases, we are quietly watching students hand their data and their critical thinking over to the very Big Tech companies we are arguing against (while we still fly on holidays, stream on smart TVs and buy the same devices from the same companies). Pretending that GenAI in education is the one place we finally draw an ethical line, while doing nothing to make its use safer or more equitable, is not helpful. By all means, keep debating, but not at the cost of another three or four cohorts.

This opinion post suggests three things universities should address now, outlines a minimal set of GenAI functions that should be available to staff and students, and proposes a four-step teaching process to help lecturers rethink their role with GenAI.

Three things universities need to address now

1. Tell students and staff clearly what they can use (Déjà vu?)

Students and staff deserve clarity on which GenAI tools they have access to, what they can use them for and which ones are institutionally supported. Has your university provided this? No more grey areas or “ask your lecturer”. If people do not know this, GenAI use is pushed into secrecy. That secrecy hands more power to Big Tech to extract data and embed bias, while quietly eroding users’ cognitive abilities.

2. Untangle GenAI from “academic integrity”

Tightly linking GenAI to academic integrity was a mistake! It has created an endless debate about whether to permit or prohibit GenAI, which pushes use further underground. At this point, there is no real equity and no real academic integrity. Use of GenAI cannot simply be stopped, proved or disproved, so pretending otherwise, while holding endless anti‑AI discussions, will not lead to a solution. There is no putting GenAI back in the bottle!

3. Treat GenAI as a shared responsibility

GenAI affects curriculum design, assessment, student support, digital literacy, employability, libraries, disability support, IT, policy and everywhere in between. It cannot sit on the shoulders of one department or lead. Every university needs a cross‑departmental AI strategy that includes the student union, academic leads, IT, the data protection office, careers, student support, administration and teaching and learning personnel. Until leadership treats GenAI as systemic, lecturers will keep firefighting contradictions and marking assignments they know were AI-generated. Bring everyone to the table, and don’t adjourn until decisions have been made that give students and staff clarity (even if that clarity is dynamic in nature – do not leave them navigating this alone for another three years).

What GenAI functions should be provided

At a minimum, institutions should give safe, equitable access to:

  • A campus-licensed GenAI model
    One model for all staff and students to ask questions, draft, summarise, explain and translate text, including support for multilingual learners.
  • Multimodal creation tools
    Tools to create images, audio, video (including avatars), diagrams, code, etc., with clear ethical and legal guidance.
  • Research support tools
    Tools to support research, transcription, coding, summaries, theme mapping, citations, etc., that reinforce critical exploration.
  • Assessment and teaching design tools
    Tools to draft examples, case variations, rubrics, flashcards, questions, etc., with outputs stored inside institutional systems.
  • Custom agents
    Staff create and share custom AI agents configured for specific purposes: subject-specific scaffolding for students, or workflow agents for planning, resource creation and content adaptation. Keep interactions within institutional systems.
  • Accessibility-focused GenAI
    Tools that deliver captions, plain language rewrites, alt text and personalised study materials. Many institutions already have these in place.

Safer GenAI tools for exploration, collaboration and reflection – but what do staff and students do with them? This is where something like Gen.S.A.R. comes in: a potential approach where staff and students explore together with GenAI, and one that is adaptable to different contexts and disciplines.

Gen.S.A.R.

Gen.S.A.R. is simply a suggested starting point; there is no magic wand, but this may help to ignite practical ideas from others. It suggests a shift from passive content delivery to constructivist and experiential learning.

  • GenAI exploration and collaborative knowledge construction
  • Scrutinise and share
  • Apply in real-world contexts with a low or no-tech approach
  • Reflect and evaluate

It keeps critical thinking, collaboration and real-world application at the centre, with GenAI as a set of tools rather than a replacement for learning. Note: GenAI is a set of tools, not a human!

Phase 1: GenAI, constructing, not copy-pasting

Students use GenAI, the lecturer, and reputable sources to explore a concept or problem linked to the learning outcomes. Lecturers guide this exploration as students work individually or in groups. With ongoing lecturer input, students may choose whether to use GenAI or other sources, but all develop an understanding of GenAI’s role in learning.

Phase 2: Scrutinise and Share

The second phase focuses on scrutinising and sharing ideas with others, not just presenting them as finished facts. Students bring GenAI outputs, reputable sources and their own thinking into dialogue. They interrogate evidence, assumptions and perspectives in groups or class discussion (social constructivism, dialogic teaching). The lecturer – the content expert – oversees this process, identifies errors, draws attention to them and helps students clarify GenAI outputs.

Phase 3: Apply, low-tech, real-world

Screens step back. Students apply what they have discovered in low or no-tech ways: diagrams, mind maps, zines, prototypes, role plays, scenarios. They connect what they discovered to real contexts and show understanding through doing, making, explaining and practical application.

Phase 4: Reflect, evaluate and look forward

Students then evaluate and reflect on both their learning process and the role of GenAI. Using written, audio, video or visual reflections, they consider what they learned, how GenAI supported or distorted that learning and how this connects to their future. This reflective work, combined with artefacts from earlier phases, supports peer, self and lecturer assessment and moves us towards competency and readiness-based judgements.

Resourcing Gen.S.A.R.? Yes, smaller class sizes and more support would be required, but aspects of this can be implemented now (and are being implemented by some already). Time shifts to facilitation, co-learning, process feedback and authentic evaluation (fewer three-thousand-word essays). This approach is not perfect, but at least it is an approach, and one that draws on long‑standing learning theories, including constructivism, social constructivism, experiential learning, and traditions in inclusive and competency‑based education.

There’s No Stopping It, Time to Shape It

GenAI is not going away. Exploitative labour practices, data abuse and profit motives are real (and not exclusive to AI), and naming these harms is essential, but letting these debates stall all movement is not helpful. Universities can choose to lead (and I commend, not condemn, those who already are) with clear guidance, equitable access to safe GenAI tools and thoughtful learning design. The alternative is all the risks that come with students and staff relying on personal accounts and workarounds.

For the integrity of education itself, it is time to translate debates into action. The genie is not going back in the bottle, and our profit-driven society is not only shaped by Big Tech but also by the everyday choices of those of us living privileged lives in westernised societies. It is time to be honest about our own complicity, to step out of the ivory tower and work with higher education students to navigate the impact GenAI is having on their lives right now.

Note: My views on GenAI for younger learners are very different; the suggestions here focus specifically on higher education.

Frances O’Donnell

Instructional Designer
ATU

Exploring the pros and cons of AI & GenAI in education, and indeed in society. Currently completing a Doctorate in Education with a focus on AI & Emerging Technologies.

Passionate about the potential education has to develop one’s self-confidence and self-worth, but frustrated by the fact it often does the opposite. AI has magnified our tendency to overassess and our inability to truly move away from rote learning.

Whether I’m working as an instructional designer, delivering workshops or researching, I think we should work together to make education a catalyst of change where learners are empowered to become confident as well as socially and environmentally conscious members of society. With or without AI, let’s change the perception of what success looks like for young people.



Rebuilding Thought Networks in the Age of AI

By Leigh Graves Wolf, University College Dublin Teaching & Learning
Estimated reading time: 5 minutes
Strengthening the mind: Highlighting the crucial need and methodology for intentionally restructuring and reinforcing human cognitive and critical thinking skills in an environment increasingly dominated by artificial intelligence. Image (and typos) generated by Nano Banana.

Thinking is a social activity. This isn’t a new insight (scholars have studied this for ages) but it’s one I keep coming back to lately as I try to stay afloat in the “AI Era.”

For a long stretch as I developed as an academic, I thought with others through technology (e.g. del.icio.us, Typepad and Twitter). We would bounce ideas off each other, glean golden nuggets of information, share resources that sparked new connections in our minds. There was something magical about that era, the serendipitous discovery of a colleague’s bookmark that led you down an unexpected intellectual rabbit hole, or a Twitter thread that challenged your thinking in ways you hadn’t anticipated. These weren’t just tools; they were extensions of our collective scholarly brain.

Then, all of that broke. (And I’m still. not. over it.)

When Our Digital Commons Began to Fracture

Koutropoulos et al. (2024) speak more eloquently to this fragmentation. They capture something I’ve been feeling but failing to articulate as clearly: the way our digital spaces have become increasingly unstable, the way platforms that once felt like home can shift beneath our feet overnight. Their collaborative autoethnography explores the metaphors we use to describe this movement and ultimately concludes that no single term captures what’s happening. What resonates most is their observation that we were never truly in control of these spaces; we were building communities on “the fickle and shifting sands of capitalism.”

Commercial generative AI feels social. You’re chatting, prompting, getting responses that (by design) seem thoughtful and engaged. But fundamentally, it is not social. You’re talking with biased algorithms. There is no human in the loop, no colleague who might push back on your thinking from their own lived experience, no peer who might share a resource you’d never have found on your own, no friend who might simply say “I’ve been thinking about this too.”

I haven’t seen genuine sharing built into any commercial generative AI tools. NotebookLM will let you share content that others can interact with, other tools allow you to create bots – but again, you’re not linking with a human. You’re not building a web of ambient findability (Morville, 2005) that made those early social media days so generative. There’s no AI equivalent of stumbling upon a colleague’s carefully curated collection and thinking, “Oh, they’re interested in this too – I should reach out.”

So in this fragmented, overly connected yet profoundly disconnected world, how do we stay connected to each other and each other’s ideas? I need my thought network now more than ever. And I suspect you do too.

Choosing Human Connection in an Algorithmic Age

Here are a few tools that have helped me navigate this landscape:

Raindrop.io – it’s not as social as del.icio.us was (oh, how I miss those days!), but it is a bookmark management tool that helps me keep track of the deluge of AI articles (and all sorts of other things) coming my way. I’ve made my collection public because (surprise!) I believe in working out loud and sharing what I’m learning. You can find it here: https://raindrop.io/leigh-wolf/ai-62057797.

RSS is Awesome is an “in-progress passion project” by Tom Hazledine. It has now become my morning ritual to open up this lovely, lightweight, no-login-needed, browser-based tool to catch up on my feeds. There’s something deeply satisfying about returning to RSS – a technology that puts the reader in control rather than an algorithm. (And yes, you can add the GenAI:N3 Blog to your feed simply by adding this URL: https://genain3.ie/blog/)

We need each other more than ever to navigate this sea of (mis)information. The platforms are fragmenting, the algorithms are optimising for engagement rather than insight, and AI offers the feeling of conversation without its substance – but we still have each other. We still have the ability to share, to curate, to point each other toward ideas worth wrestling with.

As Koutropoulos et al. (2024) challenge us, the solution isn’t to find the perfect platform – it’s to “take charge of our own data” and to invest in relationships with the humans in our educational networks. The platforms will always ebb and flow. But the connections we build with each other can (and do!) persist across whatever digital landscape emerges next.

Hold on to each other. Hold on to the tools that are enabling rather than disabling us to do this work together. And maybe, just maybe, start (re)building those thought networks – one shared bookmark, one RSS subscription, one genuine human connection at a time.

What tools are helping you stay connected to others’ thinking? What spaces have you found that still feel like home? I would love to know – please reach out in the comments below!

Reference

Koutropoulos, A., Stewart, B., Singh, L., Sinfield, S., Burns, T., Abegglen, S., Hamon, K., Honeychurch, S., & Bozkurt, A. (2024). Lines of flight: The digital fragmenting of educational networks. Journal of Interactive Media in Education, 2024(1), 11. https://doi.org/10.5334/jime.850

Morville, P. (2005). Ambient findability: What we find changes who we become. O’Reilly.

Leigh Graves Wolf

Assistant Professor
University College Dublin

Leigh Graves Wolf is a teacher-scholar and an Assistant Professor in Educational Development with Teaching and Learning at UCD. Her work focuses on online education, critical digital pedagogy, educator professional development and relationships mediated by and with technology. She has worked across the educational spectrum, from primary to higher, further and lifelong education. She believes passionately in collaboration and community.



Australian Framework for Artificial Intelligence in Higher Education


Source

Lodge, J. M., Bower, M., Gulson, K., Henderson, M., Slade, C., & Southgate, E. (2025). Australian Centre for Student Equity and Success, Curtin University

Summary

This framework provides a national roadmap for the ethical, equitable, and effective use of artificial intelligence (AI)—including generative and agentic AI—across Australian higher education. It recognises both the transformative potential and inherent risks of AI, calling for governance structures, policies, and pedagogies that prioritise human flourishing, academic integrity, and cultural inclusion. The framework builds on the Australian Framework for Generative AI in Schools but is tailored to the unique demands of higher education: research integrity, advanced scholarship, and professional formation in AI-enhanced contexts.

Centred around seven guiding principles—human-centred education, inclusive implementation, ethical decision-making, Indigenous knowledges, ethical development, adaptive skills, and evidence-informed innovation—the framework links directly to the Higher Education Standards Framework (Threshold Standards) and the UN Sustainable Development Goals. It emphasises AI literacy, Indigenous data sovereignty, environmental sustainability, and the co-design of equitable AI systems. Implementation guidance includes governance structures, staff training, assessment redesign, cross-institutional collaboration, and a coordinated national research agenda.

Key Points

  • AI in higher education must remain human-centred and ethically governed.
  • Generative and agentic AI should support, not replace, human teaching and scholarship.
  • Institutional AI frameworks must align with equity, inclusion, and sustainability goals.
  • Indigenous knowledge systems and data sovereignty are integral to AI ethics.
  • AI policies should be co-designed with students, staff, and First Nations leaders.
  • Governance requires transparency, fairness, accountability, and contestability.
  • Staff professional learning should address ethical, cultural, and environmental dimensions.
  • Pedagogical design must cultivate adaptive, critical, and reflective learning skills.
  • Sector-wide collaboration and shared national resources are key to sustainability.
  • Continuous evaluation ensures AI enhances educational quality and social good.

Conclusion

The framework positions Australia’s higher education sector to lead in responsible AI adoption. By embedding ethical, equitable, and evidence-based practices, it ensures that AI integration strengthens—not undermines—human expertise, cultural integrity, and educational purpose. It reaffirms universities as stewards of both knowledge and justice in an AI-shaped future.


URL

https://www.acses.edu.au/publication/australian-framework-for-artificial-intelligence-in-higher-education/

Summary generated by ChatGPT 5.1


Building the Manifesto: How We Got Here and What Comes Next

By Ken McCarthy
Estimated reading time: 6 minutes
Looking ahead: As we navigate the complexities of generative AI in higher education, it is crucial to remember that technology does not dictate our path. Through ethical inquiry and reimagined learning, the horizon is still ours to shape. Image (and typos) generated by Nano Banana.

When Hazel and I started working with GenAI in higher education, we did not set out to write a manifesto. We were simply trying to make sense of a fast-moving landscape. GenAI arrived quickly, finding its way into classrooms and prompting new questions about academic integrity and AI integration long before we had time to work through what it all meant. Students were experimenting earlier than many staff felt prepared for. Policies were still forming.

What eventually became the Manifesto for Generative AI in Higher Education began as our attempt to capture our thoughts. Not a policy, not a fully fledged framework, not a strategy. Just a way to hold the questions, principles, and tensions that kept surfacing. It took shape through notes gathered in margins, comments shared after workshops, ideas exchanged in meetings, and moments in teaching sessions that stayed with us long after they ended. It was never a single project. It gathered itself slowly.

From the start, we wanted it to be a short read that opened the door to big ideas. The sector already has plenty of documents that run to seventy or eighty pages. Many of them are helpful, but they can be difficult to take into a team meeting or a coffee break. We wanted something different. Something that could be read in ten minutes, but still spark thought and conversation. A series of concise statements that felt recognisable to anyone grappling with the challenges and possibilities of GenAI. A document that holds principles without pretending to offer every answer. We took inspiration from the Edinburgh Manifesto for Teaching Online, which reminded us that a series of short, honest statements can travel further than a long policy ever will.

The manifesto is a living reflection. It recognises that we stand at a threshold between what learning has been and what it might become. GenAI brings possibility and uncertainty together, and our role is to respond with imagination and integrity to keep learning a deeply human act.

Three themes shaped the work

As the ideas settled, three themes emerged that helped give structure to the thirty statements.

Rethinking teaching and learning responds to an age of abundance. Information is everywhere. The task of teaching shifts toward helping students interpret, critique, and question rather than collect. Inquiry becomes central. Several statements address this shift, emphasising that GenAI does not replace thinking. It reveals the cost of not thinking. They point toward assessment design that rewards insight over detection and remind us that curiosity drives learning in ways that completion never can.

Responsibility, ethics, and power acknowledges that GenAI is shaped by datasets, values, and omissions. It is not neutral. This theme stresses transparency, ethical leadership, and the continuing importance of academic judgement. It challenges institutions to act with care, not just efficiency. It highlights that prompting is an academic skill, not a technical trick, and that GenAI looks different in every discipline, which means no single approach will fit all contexts.

Imagination, humanity, and the future encourages us to look beyond the disruption of the present moment and ask what we want higher education to become. It holds inclusion as a requirement rather than an aspiration. It names sustainability as a learning outcome. It insists that ethics belong at the beginning of design processes. It ends with the reminder that the horizon is still ours to shape and that the future classroom is a conversation where people and systems learn in dialogue without losing sight of human purpose.

How it came together

The writing process was iterative. Some statements arrived whole. Others needed several attempts. We removed the ones that tried to do too much and kept the ones that stayed clear in the mind after a few days. We read them aloud to test the rhythm. The text only settled into its final shape once we noticed the three themes forming naturally.

The feedback from our reviewers, Tom Farrelly and Sue Beckingham, strengthened the final version. Their comments helped us tighten the language and balance the tone. The manifesto may have two named authors, but it is built from many voices.

Early responses from the sector

In the short time since the manifesto was released, the webpage has been visited by more than 750 people from 40 countries. For a document that began as a few lines in a notebook, this has been encouraging. It suggests the concerns and questions we tried to capture are widely shared. More importantly, it signals that there is an appetite for a conversation that is thoughtful, practical, and honest about the pace of change.

This early engagement reinforces something we felt from the start. The manifesto is only the beginning. It is not a destination. It is a point of departure for a shared journey.

Next steps: a book of voices across the sector

To continue that journey, we are developing a book of short essays and chapters that respond to the manifesto. Each contribution will explore a statement within the document. The chapters will be around 1,000 words. They can draw on practice, research, disciplinary experience, student partnership, leadership, policy, or critique. They can support, question, or challenge the manifesto. The aim is not agreement. The aim is insight.

We want to bring together educators, librarians, technologists, academic developers, researchers, students, and professional staff. The only requirement is that contributors have something to say about how GenAI is affecting their work, their discipline, or their students.

An invitation to join us

If you would like to contribute, we would welcome your expression of interest. You do not need specialist expertise in AI. You only need a perspective that might help the sector move forward with clarity and confidence.

Your chapter should reflect on a single statement. It could highlight emerging practice or ask questions that do not yet have answers. It could bring a disciplinary lens or a broader institutional one.

The manifesto was built from shared conversations. The next stage will be shaped by an even wider community. If this work is going to stay alive, it needs many hands.

The horizon is still ours to shape. If you would like to help shape it with us, please submit an expression of interest through the following link: https://forms.gle/fGTR9tkZrK1EeoLH8

Ken McCarthy

Head of Centre for Academic Practice
South East Technological University

As Head of the Centre for Academic Practice at SETU, I lead strategic initiatives to enhance teaching, learning, and assessment across the university. I work collaboratively with academic staff, professional teams, and students to promote inclusive, research-informed, and digitally enriched education.
I’m passionate about fostering academic excellence through professional development, curriculum design, and scholarship of teaching and learning. I also support and drive innovation in digital pedagogy and learning spaces.



AI May Be Scoring Your College Essay: Welcome to the New Era of Admissions


The gatekeepers go digital: Welcome to the new era of college admissions, where artificial intelligence is increasingly being used to evaluate student essays, fundamentally changing the application process. Image (and typos) generated by Nano Banana.

Source

AP News

Summary

This article explores the expanding use of AI systems in U.S. university admissions processes. As applicant numbers rise and timelines tighten, institutions are increasingly turning to AI tools to assist in reviewing essays, evaluating transcripts and identifying key indicators of academic readiness. Supporters of AI-assisted admissions argue that the tools offer efficiency gains, help standardise evaluation criteria and reduce human workload. Critics raise concerns about fairness, particularly regarding students whose writing styles or backgrounds may not align with the patterns AI systems are trained to recognise. Additionally, the article notes a lack of transparency from some institutions about how heavily they rely on AI in decision-making, prompting public scrutiny and calls for clearer communication. The broader significance lies in AI’s movement beyond teaching and assessment into high-stakes decision processes that affect students’ educational and career trajectories. The piece concludes that institutions adopting AI must implement strong auditing mechanisms and maintain human oversight to ensure integrity and trust.

Key Points

  • AI now used in admissions decision-making.
  • Faster processing of applications.
  • Concerns about bias and fairness.
  • Public criticism where transparency lacking.
  • Indicates AI entering core institutional processes.


URL

https://apnews.com/article/87802788683ca4831bf1390078147a6f

Summary generated by ChatGPT 5.1