Enacting Assessment Reform in a Time of Artificial Intelligence


Source

Tertiary Education Quality and Standards Agency (TEQSA), Australian Government

Summary

This resource addresses how Australian higher education can reform assessment in response to the rise of generative AI. Building on earlier work (Assessment Reform for the Age of Artificial Intelligence), it sets out strategies that align with the Higher Education Standards Framework while acknowledging that gen AI is now ubiquitous in student learning and professional practice. The central message is that detection alone is insufficient; instead, assessment must be redesigned to assure learning authentically, ethically, and sustainably.

The report outlines three main pathways: (1) program-wide assessment reform, which integrates assessment as a coherent system across degrees; (2) unit/subject-level assurance of learning, where each subject includes at least one secure assessment task; and (3) a hybrid approach combining both. Each pathway carries distinct advantages and challenges, from institutional resourcing and staff coordination to maintaining program coherence and addressing integrity risks. Critical across all approaches is the need to balance immediate integrity concerns with long-term goals of preparing students for an AI-integrated future.

Key Points

  • Generative AI necessitates structural assessment reform, not reliance on detection.
  • Assessments must equip students to participate ethically and critically in an AI-enabled society.
  • Assurance of learning requires multiple, inclusive, and contextualised approaches.
  • Program-level reform provides coherence and alignment but demands significant institutional commitment.
  • Unit-level assurance offers quick implementation but risks fragmentation.
  • Hybrid approaches balance flexibility with systemic assurance.
  • Over-reliance on traditional supervised exams risks reducing authenticity and equity.
  • Critical questions must guide reform: alignment across units, disciplinary variation, and student experience.
  • Assessment must reflect authentic professional practices where gen AI is legitimately used.
  • Ongoing collaboration and evidence-sharing across the sector are vital for sustainable reform.

Conclusion

The report concludes that assessment reform in the age of AI is not optional but essential. Institutions must move beyond short-term fixes and design assessment systems that assure learning, uphold integrity, and prepare students for future professional contexts. This requires thoughtful strategy, collaboration, and a willingness to reimagine assessment as a developmental, systemic, and values-driven practice.

Keywords

URL

https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/enacting-assessment-reform-time-artificial-intelligence

Summary generated by ChatGPT 5


Generative AI in Higher Education Teaching and Learning: Sectoral Perspectives


Source

Higher Education Authority

Summary

This report, commissioned by the Higher Education Authority (HEA), captures sector-wide perspectives on the impact of generative AI across Irish higher education. Through ten thematic focus groups and a leadership summit, it gathered insights from academic staff, students, support personnel, and leaders. The findings show that AI is already reshaping teaching, learning, assessment, and governance, but institutional responses remain fragmented and uneven. Participants emphasised the urgent need for national coordination, values-led policies, and structured capacity-building for both staff and students.

Key cross-cutting concerns included threats to academic integrity, the fragility of current assessment practices, risks of skill erosion, and unequal access. At the same time, stakeholders recognised opportunities for AI to enhance teaching, personalise learning, support inclusion, and free staff time for higher-value educational work. A consistent theme was that AI should not be treated merely as a technical disruption but as a pedagogical and ethical challenge that requires re-examining educational purpose.

Key Points

  • Sectoral responses to AI are fragmented; coordinated national guidance is urgently needed.
  • Generative AI challenges core values of authorship, originality, and academic integrity.
  • Assessment redesign is necessary—moving towards authentic, process-focused approaches.
  • Risks include skill erosion in writing, reasoning, and information literacy if AI is overused.
  • AI literacy for staff and students must go beyond tool use to include ethics and critical thinking.
  • Ethical use of AI requires shared principles, not just compliance or detection measures.
  • Inclusion is not automatic: without deliberate design, AI risks deepening inequality.
  • Staff feel underprepared and need professional development and institutional support.
  • Infrastructure challenges extend beyond tools to governance, procurement, and policy.
  • Leadership must shape educational vision, not just manage risk or compliance.

Conclusion

Generative AI is already embedded in higher education, raising urgent questions of purpose, integrity, and equity. The consultation shows both enthusiasm and unease, but above all a readiness to engage. The report concludes that a coordinated, values-led, and inclusive approach—balancing innovation with responsibility—will be essential to ensure AI strengthens, rather than undermines, Ireland’s higher education mission.

Keywords

URL

https://hea.ie/2025/09/17/generative-ai-in-higher-education-teaching-and-learning-sectoral-perspectives/

Summary generated by ChatGPT 5


How to use ChatGPT at university without cheating: ‘Now it’s more like a study partner’


Three university students (two male, one female) are seated at a table with laptops and books, smiling and engaged in discussion. Behind them, a large transparent screen displays a glowing blue humanoid AI figure pointing to various academic data and charts. The setting is a modern library, conveying a collaborative study environment where AI acts as a helpful, non-cheating resource. Generated by Nano Banana.
Moving beyond fears of academic dishonesty, many students are now leveraging ChatGPT as an ethical ‘study partner’ to enhance their learning experience at university. This image illustrates a collaborative approach where AI supports understanding and exploration, rather than providing shortcuts, thereby fostering a new era of academic assistance. Image generated by Nano Banana.

Source

The Guardian

Summary

Many students now treat ChatGPT less as a cheating shortcut and more as a study partner, using it for grammar checks, revision, practice questions, and organising notes. Student use of generative AI jumped from 66% to 92% in a year. Universities are clarifying their rules: AI can support study but must not generate assignment content. Educators stress AI literacy, awareness of risks such as hallucinations and fabricated references, and critical thinking to ensure AI complements rather than replaces learning.

Key Points

  • Student AI use rose from ~66% to ~92% in a year; AI is increasingly seen as a study partner rather than a cheating tool.
  • Valid uses: organising notes, summarising, and generating practice questions.
  • Risks: overreliance and hallucinations; using AI to write assignments remains banned.
  • Some universities track AI usage or require usage logs; policies are becoming clearer.
  • Message: AI should be supplemental, not a substitute; build literacy and critical skills.

Keywords

URL

https://www.theguardian.com/education/2025/sep/14/how-to-use-chatgpt-at-university-without-cheating-now-its-more-like-a-study-partner

Summary generated by ChatGPT 5


‘It’s going to be a life skill’: educators discuss the impact of AI on university education


In a modern, sunlit conference room with a city view, a diverse group of seven educators in business attire are gathered around a sleek table. They are looking at a central holographic display that reads 'AI FLUENCY: A LIFE SKILL FOR 21ST CENTURY' and shows icons related to AI and learning. The scene depicts a discussion among professionals about the transformative impact of AI on university education. Generated by Nano Banana.
As AI reshapes industries and daily life, educators are converging to discuss its profound impact on university education, recognising AI fluency not merely as a technical skill but as an essential ‘life skill’ for the 21st century. This image captures a pivotal conversation among academic leaders focused on integrating AI into curricula to prepare students for the future. Image generated by Nano Banana.

Source

The Guardian

Summary

Educators argue that generative AI is swiftly moving from a novelty to a necessary skill, and universities must catch up. Students are often more advanced in AI usage than academic institutions, which are playing catch‑up with policy, curriculum adaptation, and support services. The piece emphasises that being able to use AI tools (and understand their limits) should be as fundamental as reading and writing. Universities are urged to incorporate AI literacy broadly—across disciplines—ensure equitable access, and ensure that teaching still reinforces enduring human skills like critical thinking, creativity, and communication.

Key Points

  • AI proficiency is becoming a life skill; many students already use AI tools, often outpacing their institutions’ capacity to respond.
  • Important for students to evaluate what AI can and can’t do, not just how to use it.
  • Universities should show leadership: clear AI strategy, support across all courses.
  • Equity matters: ensuring all students have access and skills to use AI.
  • Human skills (creativity, communication, thinking) retain their value even as AI tools become common.

Keywords

URL

https://www.theguardian.com/education/2025/sep/13/its-going-to-be-a-life-skill-educators-discuss-the-impact-of-ai-on-university-education

Summary generated by ChatGPT 5


Social media is teaching children how to use AI. How can teachers keep up?


A split image contrasting two scenes. On the left, three young children are engrossed in tablets and smartphones, surrounded by vibrant social media interfaces featuring AI-related content and hashtags like "#AIforkids." On the right, a teacher stands in a traditional classroom looking somewhat perplexed at a whiteboard with "AI?" written on it, while students sit at desks, symbolizing the challenge for educators to keep pace with children's informal AI learning. Image (and typos) generated by Nano Banana.
While children are rapidly learning about AI through pervasive social media platforms, educators face the challenge of integrating this knowledge into formal learning environments. This image highlights the growing disconnect between how children are acquiring AI literacy informally and the efforts teachers must make to bridge this gap and keep classroom instruction relevant and engaging. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

Students are learning to use AI mainly through TikTok, Discord, and peer networks, while teachers rely on informal exchanges and LinkedIn. This creates quick but uneven knowledge transfer that often skips deeper issues such as bias, equity, and ethics. A Canadian pilot project showed that structured teacher education transforms enthusiasm into critical AI literacy, giving educators both vocabulary and judgment to integrate AI responsibly. The article stresses that without institutional clarity and professional development, AI adoption risks reinforcing inequity and mistrust.

Key Points

  • Informal learning (TikTok, Discord, staff rooms) drives AI uptake but lacks critical depth.
  • Teacher candidates benefit from structured AI education, gaining language and tools to discuss ethics and bias.
  • Institutional AI policies are fragmented, leaving instructors without support and creating confusion.
  • Equity and bias are central concerns; multilingual learners may be disadvantaged by uncritical AI use.
  • Embedding AI literacy in teacher education and learning communities is critical to move from casual adoption to critical engagement.

Keywords

URL

https://theconversation.com/social-media-is-teaching-children-how-to-use-ai-how-can-teachers-keep-up-264727

Summary generated by ChatGPT 5