Australian Framework for Artificial Intelligence in Higher Education


Source

Lodge, J. M., Bower, M., Gulson, K., Henderson, M., Slade, C., & Southgate, E. (2025). Australian Framework for Artificial Intelligence in Higher Education. Australian Centre for Student Equity and Success, Curtin University.

Summary

This framework provides a national roadmap for the ethical, equitable, and effective use of artificial intelligence (AI)—including generative and agentic AI—across Australian higher education. It recognises both the transformative potential and inherent risks of AI, calling for governance structures, policies, and pedagogies that prioritise human flourishing, academic integrity, and cultural inclusion. The framework builds on the Australian Framework for Generative AI in Schools but is tailored to the unique demands of higher education: research integrity, advanced scholarship, and professional formation in AI-enhanced contexts.

Centred around seven guiding principles—human-centred education, inclusive implementation, ethical decision-making, Indigenous knowledges, ethical development, adaptive skills, and evidence-informed innovation—the framework links directly to the Higher Education Standards Framework (Threshold Standards) and the UN Sustainable Development Goals. It emphasises AI literacy, Indigenous data sovereignty, environmental sustainability, and the co-design of equitable AI systems. Implementation guidance includes governance structures, staff training, assessment redesign, cross-institutional collaboration, and a coordinated national research agenda.

Key Points

  • AI in higher education must remain human-centred and ethically governed.
  • Generative and agentic AI should support, not replace, human teaching and scholarship.
  • Institutional AI frameworks must align with equity, inclusion, and sustainability goals.
  • Indigenous knowledge systems and data sovereignty are integral to AI ethics.
  • AI policies should be co-designed with students, staff, and First Nations leaders.
  • Governance requires transparency, fairness, accountability, and contestability.
  • Staff professional learning should address ethical, cultural, and environmental dimensions.
  • Pedagogical design must cultivate adaptive, critical, and reflective learning skills.
  • Sector-wide collaboration and shared national resources are key to sustainability.
  • Continuous evaluation ensures AI enhances educational quality and social good.

Conclusion

The framework positions Australia’s higher education sector to lead in responsible AI adoption. By embedding ethical, equitable, and evidence-based practices, it ensures that AI integration strengthens—not undermines—human expertise, cultural integrity, and educational purpose. It reaffirms universities as stewards of both knowledge and justice in an AI-shaped future.

Keywords

URL

https://www.acses.edu.au/publication/australian-framework-for-artificial-intelligence-in-higher-education/

Summary generated by ChatGPT 5.1


Building the Manifesto: How We Got Here and What Comes Next

By Ken McCarthy
Estimated reading time: 6 minutes
Looking ahead: As we navigate the complexities of generative AI in higher education, it is crucial to remember that technology does not dictate our path. Through ethical inquiry and reimagined learning, the horizon is still ours to shape. Image (and typos) generated by Nano Banana.

When Hazel and I started working with GenAI in higher education, we did not set out to write a manifesto. We were simply trying to make sense of a fast-moving landscape. GenAI arrived quickly, finding its way into classrooms and prompting new questions about academic integrity and AI integration long before we had time to work through what it all meant. Students were experimenting earlier than many staff felt prepared for. Policies were still forming.

What eventually became the Manifesto for Generative AI in Higher Education began as our attempt to capture our thoughts. Not a policy, not a fully fledged framework, not a strategy. Just a way to hold the questions, principles, and tensions that kept surfacing. It took shape through notes gathered in margins, comments shared after workshops, ideas exchanged in meetings, and moments in teaching sessions that stayed with us long after they ended. It was never a single project. It gathered itself slowly.

From the start, we wanted it to be a short read that opened the door to big ideas. The sector already has plenty of documents that run to seventy or eighty pages. Many of them are helpful, but they can be difficult to take into a team meeting or a coffee break. We wanted something different. Something that could be read in ten minutes, but still spark thought and conversation. A series of concise statements that felt recognisable to anyone grappling with the challenges and possibilities of GenAI. A document that holds principles without pretending to offer every answer. We took inspiration from the Edinburgh Manifesto for Teaching Online, which reminded us that a series of short, honest statements can travel further than a long policy ever will.

The manifesto is a living reflection. It recognises that we stand at a threshold between what learning has been and what it might become. GenAI brings possibility and uncertainty together, and our role is to respond with imagination and integrity to keep learning a deeply human act.

Three themes shaped the work

As the ideas settled, three themes emerged that helped give structure to the thirty statements.

Rethinking teaching and learning responds to an age of abundance. Information is everywhere. The task of teaching shifts toward helping students interpret, critique, and question rather than collect. Inquiry becomes central. Several statements address this shift, emphasising that GenAI does not replace thinking. It reveals the cost of not thinking. They point toward assessment design that rewards insight over detection and remind us that curiosity drives learning in ways that completion never can.

Responsibility, ethics, and power acknowledges that GenAI is shaped by datasets, values, and omissions. It is not neutral. This theme stresses transparency, ethical leadership, and the continuing importance of academic judgement. It challenges institutions to act with care, not just efficiency. It highlights that prompting is an academic skill, not a technical trick, and that GenAI looks different in every discipline, which means no single approach will fit all contexts.

Imagination, humanity, and the future encourages us to look beyond the disruption of the present moment and ask what we want higher education to become. It holds inclusion as a requirement rather than an aspiration. It names sustainability as a learning outcome. It insists that ethics belong at the beginning of design processes. It ends with the reminder that the horizon is still ours to shape and that the future classroom is a conversation where people and systems learn in dialogue without losing sight of human purpose.

How it came together

The writing process was iterative. Some statements arrived whole. Others needed several attempts. We removed the ones that tried to do too much and kept the ones that stayed clear in the mind after a few days. We read them aloud to test the rhythm. The text only settled into its final shape once we noticed the three themes forming naturally.

The feedback from our reviewers, Tom Farrelly and Sue Beckingham, strengthened the final version. Their comments helped us tighten the language and balance the tone. The manifesto may have two named authors, but it is built from many voices.

Early responses from the sector

In the short time since the manifesto was released, the webpage has been visited by more than 750 people from 40 countries. For a document that began as a few lines in a notebook, this has been encouraging. It suggests the concerns and questions we tried to capture are widely shared. More importantly, it signals that there is an appetite for a conversation that is thoughtful, practical, and honest about the pace of change.

This early engagement reinforces something we felt from the start. The manifesto is only the beginning. It is not a destination. It is a point of departure for a shared journey.

Next steps: a book of voices across the sector

To continue that journey, we are developing a book of short essays and chapters that respond to the manifesto. Each contribution will explore a statement within the document. The chapters will be around 1,000 words. They can draw on practice, research, disciplinary experience, student partnership, leadership, policy, or critique. They can support, question, or challenge the manifesto. The aim is not agreement. The aim is insight.

We want to bring together educators, librarians, technologists, academic developers, researchers, students, and professional staff. The only requirement is that contributors have something to say about how GenAI is affecting their work, their discipline, or their students.

An invitation to join us

If you would like to contribute, we would welcome your expression of interest. You do not need specialist expertise in AI. You only need a perspective that might help the sector move forward with clarity and confidence.

Your chapter should reflect on a single statement. It could highlight emerging practice or ask questions that do not yet have answers. It could bring a disciplinary lens or a broader institutional one.

The manifesto was built from shared conversations. The next stage will be shaped by an even wider community. If this work is going to stay alive, it needs many hands.

The horizon is still ours to shape. If you would like to help shape it with us, please submit an expression of interest through the following link: https://forms.gle/fGTR9tkZrK1EeoLH8

Ken McCarthy

Head of Centre for Academic Practice
South East Technological University

As Head of the Centre for Academic Practice at SETU, I lead strategic initiatives to enhance teaching, learning, and assessment across the university. I work collaboratively with academic staff, professional teams, and students to promote inclusive, research-informed, and digitally enriched education.
I’m passionate about fostering academic excellence through professional development, curriculum design, and scholarship of teaching and learning. I also support and drive innovation in digital pedagogy and learning spaces.

Keywords


Report Reveals Potential of AI to Help UK Higher Education Sector Assess Its Research More Efficiently and Fairly


Streamlining academia: A new report illuminates how artificial intelligence can be leveraged to introduce greater efficiency and fairness into the complex process of assessing research within the UK’s higher education sector. Image (and typos) generated by Nano Banana.

Source

University of Bristol

Summary

This report highlights how UK universities are beginning to integrate generative AI into research assessment processes, marking a significant shift in institutional workflows. Early pilot programmes suggest that AI can assist in evaluating research outputs, managing reviewer assignments and streamlining administrative tasks associated with national research exercises. The potential benefits include increased consistency across assessments, reduced administrative burden and enhanced scalability for institutions with extensive research portfolios. Despite these advantages, the report underscores the importance of strong governance structures, transparent methodological frameworks and ongoing human oversight to ensure fairness, academic integrity and alignment with sector norms. The emerging consensus is that AI should serve as an augmenting tool rather than a replacement for expert judgement. Institutions are encouraged to take a measured approach that balances innovation with ethical responsibility while exploring long-term strategies for responsible adoption and sector-wide coordination. This marks a shift from viewing AI as a hypothetical tool for research assessment to recognising it as an active component of evolving academic practice.

Key Points

  • GenAI already used in UK HE for research assessment.
  • Potential efficiency gains in processing large volumes of research.
  • Increased standardisation of evaluation.
  • Governance and oversight essential.
  • Recommends controlled scaling across sector.

Keywords

URL

https://www.bristol.ac.uk/news/2025/november/report-reveals-potential-of-ai-to-help-assess-research-more-efficiently-.html

Summary generated by ChatGPT 5.1


The Transformative Power of Communities of Practice in AI Upskilling for Educators

By Bernie Goldbach, RUN EU SAP Lead
Estimated reading time: 5 minutes
The power of collaboration: Communities of Practice are essential for educators to collectively navigate and integrate new AI technologies, transforming teaching and learning through shared knowledge and support. Image (and typos) generated by Nano Banana.

When the N-TUTORR programme ended in Ireland, I stayed in the main Edtech25 auditorium to hear some of the final conversations among its key players. They stood at a remarkable intersection of professional development and technological innovation, and some of them issued a call to action for continued conversation, perhaps through engaging with generative AI tools within a Community of Practice (CoP).

Throughout my 40-year teaching career, I have walked pathways to genuine job satisfaction that extend far beyond simple skill acquisition. In my case, this satisfaction has emerged from the synergy between collaborative learning, pedagogical innovation, and the excitement of exploring uncharted territory alongside peers who share a commitment to educational excellence.

Finding Professional Fulfillment Through Shared Learning

The journey of upskilling in generative AI feels overwhelming when undertaken in isolation. I am still looking for a structured CoP for Generativism in Education. This would be a rich vein of collective discovery. At the moment, I have three colleagues who help me develop my skills with ethical and sustainable use of AI.

Ethan Mollick, whose research at the Wharton School has illuminated the practical applications of AI in educational contexts, consistently emphasises that the most effective learning about AI tools happens through shared experimentation and peer discussion. His work demonstrates that educators who engage collaboratively with AI technologies develop more sophisticated mental models of how these tools can enhance rather than replace pedagogical expertise. This collaborative approach alleviates the anxiety many educators feel about technological change, replacing it with curiosity and professional confidence.

Mairéad Pratschke, whose work emphasises the importance of collaborative professional learning, has highlighted how communities create safe spaces where educators can experiment, fail, and succeed together without judgment. This psychological safety becomes the foundation upon which genuine professional growth occurs.

Frances O’Donnell, whose insights at major conferences have become invaluable resources for educators navigating the AI landscape, directs the most effective AI workshops I have attended. O’Donnell’s hands-on training at conferences such as CESI (https://www.cesi.ie), EDULEARN (https://iceri.org), ILTA (https://ilta.ie), and Online Educa Berlin (https://oeb.global) has illuminated the engaging features of instructional design that emerge when educators thoughtfully integrate AI tools. Her instructional design frameworks demonstrate how AI can support the creation of personalised learning pathways, adaptive assessments, and multimodal content that engages diverse learners. O’Donnell’s emphasis on the human element in AI-assisted design resonates deeply with Communities of Practice.

And thanks to Frances O’Donnell, I discovered the AI assistants inside H5P.

Elevating Instructional Design Through AI-Assisted Tools

The quality of instructional design, shaped by clever educators, represents the most significant leap I have made by combining AI tools with collaborative professional learning. The commercial version of H5P (https://h5p.com) has revolutionised my workflow when creating interactive educational content. The smart import feature of H5P.com complements my teaching practice. I can quickly design rich, engaging learning experiences that would previously have required specialised technical skills or significant time investments. I have discovered ways to create everything from interactive videos with embedded questions to gamified quizzes and sophisticated branching scenarios.

I hope I find a CoP in Ireland that is interested in several of the H5P workflows I have adopted. For the moment, I’m revealing these remarkable capabilities while meeting people at education events in Belgium, Spain, Portugal, and the Netherlands. It feels like I’m a town crier who has a notebook full of shared templates. I want to offer links to the interactive content that I have created with H5P AI and gain feedback from interested colleagues. But more than the conversations at the conferences, I’m interested in making real connections with educators who want to actively participate in vibrant online communities where sustained professional learning continues.

Sustaining Innovation with Community

Job satisfaction among educators has always been closely tied to their sense of efficacy and their ability to make meaningful impacts on student learning. Communities of Practice focused on AI upskilling amplify this satisfaction by creating networks of mutual support where members celebrate innovations, troubleshoot challenges, and collectively develop best practices. When an educator discovers an effective way to use AI for differentiation or assessment design, sharing that discovery with colleagues who understand the pedagogical context creates a profound sense of professional contribution.

These communities also combat the professional tension that proficient AI users currently face. Mollick’s observations about blowback against widespread AI adoption in education reveal a critical imperative to stand together with a network that validates the quality of teaching and provides constructive feedback. When discoveries are shared with a community, individual risk-taking morphs into collective innovation, making the professional development experience inherently more satisfying and sustainable.

We need the spark of N-TUTORR inside an AI-focused Community of Practice. We need to amplify voices. Together we need to become confident navigators of innovation. We need to co-create contextually appropriate pedagogical approaches that effectively leverage AI in education.


Keywords


A History Professor Says AI Did Not Break College; It Exposed How Broken It Already Was


Unmasking the flaws: A history professor’s perspective suggesting that AI merely shone a light on the structural vulnerabilities and existing problems within higher education, rather than being the sole source of disruption. Image (and typos) generated by Nano Banana.

Source

Business Insider

Summary

This article features a U.S. history professor who argues that generative AI did not cause the crisis currently unfolding in higher education but instead revealed long-standing structural flaws. According to the professor, AI has exposed weaknesses in assessment design, unclear expectations placed on students and unsustainable workloads carried by academic staff. The sudden visibility of AI-generated essays and assignments has forced institutions to confront the limitations of traditional assessment models that rely heavily on polished written output rather than demonstrated cognitive processes. The professor notes that AI has unintentionally highlighted inequities in student preparation, inconsistencies in grading norms and the mismatch between institutional rhetoric and actual resourcing. Rather than attempting to suppress AI, the article argues that higher education should treat this moment as an opportunity to redesign curricula, diversify assessments and rethink the broader purpose of university education. The piece positions AI as a catalyst for long-overdue reform, emphasising that genuine improvement will require institutions to invest in pedagogical redesign, staff support and clearer communication around learning outcomes.

Key Points

  • AI highlighted systemic weaknesses already present in higher education
  • Exposed flaws in assessment design and grading expectations
  • Revealed pressures on overworked teaching staff
  • Suggests AI could drive constructive reform
  • Encourages rethinking pedagogy and institutional priorities

Keywords

URL

https://www.businessinsider.com/ai-didnt-break-college-it-exposed-broken-system-professor-2025-11

Summary generated by ChatGPT 5.1