AI training becomes mandatory at more US law schools


In a classic, wood-paneled law school lecture hall, a professor stands at the front addressing a large class of students, all working on laptops. Behind the professor, a large, glowing blue holographic screen displays 'MANDATORY AI LEGAL TRAINING: FALL 2025 CURRICULUM' along with complex flowcharts and data related to AI and legal analysis. The scene signifies the integration of AI training into legal education.
As the legal landscape rapidly evolves with AI advancements, more US law schools are making AI training a mandatory component of their curriculum. This image captures a vision of future legal education, where students are equipped with essential AI skills to navigate and practice law in a technologically transformed world. Image (and typos) generated by Nano Banana.

Source

Reuters

Summary

A growing number of U.S. law schools are making AI training compulsory, embedding it into first-year curricula to better equip graduates for the evolving legal sector. Instead of resisting AI, institutions such as Fordham and Arizona State now include exercises (e.g. comparing AI-generated and professor-written legal analyses) in orientation and foundational courses. These programmes cover how the models work, prompt design, and ethical risks such as hallucination. Legal educators believe AI fluency is fast becoming a baseline competency for future attorneys, driven by employer expectations and emerging norms in legal practice.

Key Points

  • At least eight law schools now require AI training in first-year orientation or core courses.
  • Fordham’s orientation exercise had students compare a ChatGPT-drafted legal summary vs. a professor’s.
  • Schools cover how AI works, its limitations and errors, and responsible prompt practices.
  • The shift signals a move from seeing AI as a cheating risk to accepting it as a core legal skill.
  • Legal employers endorse this direction, arguing new lawyers need baseline AI literacy to be effective.

Keywords

URL

https://www.reuters.com/legal/legalindustry/ai-training-becomes-mandatory-more-us-law-schools-2025-09-22/

Summary generated by ChatGPT 5


Generative AI in Higher Education Teaching and Learning: Sectoral Perspectives


Source

Higher Education Authority

Summary

This report, commissioned by the Higher Education Authority (HEA), captures sector-wide perspectives on the impact of generative AI across Irish higher education. Through ten thematic focus groups and a leadership summit, it gathered insights from academic staff, students, support personnel, and leaders. The findings show that AI is already reshaping teaching, learning, assessment, and governance, but institutional responses remain fragmented and uneven. Participants emphasised the urgent need for national coordination, values-led policies, and structured capacity-building for both staff and students.

Key cross-cutting concerns included threats to academic integrity, the fragility of current assessment practices, risks of skill erosion, and unequal access. At the same time, stakeholders recognised opportunities for AI to enhance teaching, personalise learning, support inclusion, and free staff time for higher-value educational work. A consistent theme was that AI should not be treated merely as a technical disruption but as a pedagogical and ethical challenge that requires re-examining educational purpose.

Key Points

  • Sectoral responses to AI are fragmented; coordinated national guidance is urgently needed.
  • Generative AI challenges core values of authorship, originality, and academic integrity.
  • Assessment redesign is necessary—moving towards authentic, process-focused approaches.
  • Risks include skill erosion in writing, reasoning, and information literacy if AI is overused.
  • AI literacy for staff and students must go beyond tool use to include ethics and critical thinking.
  • Ethical use of AI requires shared principles, not just compliance or detection measures.
  • Inclusion is not automatic: without deliberate design, AI risks deepening inequality.
  • Staff feel underprepared and need professional development and institutional support.
  • Infrastructure challenges extend beyond tools to governance, procurement, and policy.
  • Leadership must shape educational vision, not just manage risk or compliance.

Conclusion

Generative AI is already embedded in higher education, raising urgent questions of purpose, integrity, and equity. The consultation shows both enthusiasm and unease, but above all a readiness to engage. The report concludes that a coordinated, values-led, and inclusive approach—balancing innovation with responsibility—will be essential to ensure AI strengthens, rather than undermines, Ireland’s higher education mission.

Keywords

URL

https://hea.ie/2025/09/17/generative-ai-in-higher-education-teaching-and-learning-sectoral-perspectives/

Summary generated by ChatGPT 5


How AI Is Changing—Not ‘Killing’—College


A diverse group of college students is gathered in a modern university library or common area, with some holding tablets or looking at laptops. Above them, a large, glowing word cloud hovers, filled with terms related to artificial intelligence and its impact. Prominent words include "HELPFUL," "FUTURE," "ETHICS," "CHEATING," "BIAS," and "CONCERNING," reflecting a range of student opinions. The overall impression is one of active discussion and varied perspectives on AI.
What do the next generation of leaders and innovators think about artificial intelligence? This visual captures the dynamic and often contrasting views of college students on AI’s role in their education, future careers, and daily lives. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

A new Student Voice survey by Inside Higher Ed and Generation Lab captures how U.S. college students are adapting to generative AI in their studies and what they expect from institutions. Of the 1,047 students surveyed, 85 per cent had used AI tools in the past year—mainly for brainstorming, tutoring, and studying—while only a quarter admitted to using them for completing assignments. Most respondents called for universities to provide education on ethical AI use and clearer, standardised policies, rather than policing or banning the technology. Although students are divided about AI’s impact on critical thinking, most agree it can enhance learning if used responsibly. The majority do not view AI as diminishing the value of college; some even see it as increasing it.

Key Points

  • 85 per cent of students have used AI tools for coursework, mainly for brainstorming and study support.
  • 97 per cent want universities to respond to AI’s impact on academic integrity through education, not restriction.
  • Over half say AI has mixed effects on critical thinking; 27 per cent find it enhances learning.
  • Students want institutions to offer professional and ethical AI training, not leave it to individual faculty.
  • Only 18 per cent believe AI reduces the value of college; 23 per cent say it increases it.

Keywords

URL

https://www.insidehighered.com/news/students/academics/2025/08/29/survey-college-students-views-ai

Summary generated by ChatGPT 5


Explainable AI in education: Fostering human oversight and shared responsibility


Source

The European Digital Education Hub

Summary

This European Digital Education Hub report explores how explainable artificial intelligence (XAI) can support trustworthy, ethical, and effective AI use in education. XAI is positioned as central to ensuring transparency, fairness, accountability, and human oversight in educational AI systems. The document frames XAI within EU regulations (AI Act, GDPR, Digital Services Act, etc.), highlighting its role in protecting rights while fostering innovation. It stresses that explanations of AI decisions must be understandable, context-sensitive, and actionable for learners, educators, policy-makers, and developers alike.

The report emphasises both the technical and human dimensions of XAI, defining four key concepts: transparency, interpretability, explainability, and understandability. Practical applications include intelligent tutoring systems and AI-driven lesson planning, with case studies showing how different stakeholders perceive risks and benefits. A major theme is capacity-building: educators need new competences to critically assess AI, integrate it responsibly, and communicate its role to students. Ultimately, XAI is not only a technical safeguard but a pedagogical tool that fosters agency, metacognition, and trust.

Key Points

  • XAI enables trust in AI by making systems transparent, interpretable, explainable, and understandable.
  • EU frameworks (AI Act, GDPR) require AI systems in education to meet legal standards of fairness, accountability, and transparency.
  • Education use cases include intelligent tutoring systems and lesson-plan generators, where human oversight remains critical.
  • Stakeholders (educators, learners, developers, policymakers) require tailored explanations at different levels of depth.
  • Teachers need competences in AI literacy, critical thinking, and the ethical use of XAI tools.
  • Explanations should align with pedagogical goals, fostering self-regulated learning and student agency.
  • Risks include bias, opacity of data-driven models, and threats to academic integrity if explanations are weak.
  • Opportunities lie in supporting inclusivity, accessibility, and personalised learning.
  • Collaboration between developers, educators, and authorities is essential to balance innovation with safeguards.
  • XAI in education is about shared responsibility—designing systems where humans remain accountable and learners remain empowered.

Conclusion

The report concludes that explainable AI is a cornerstone for trustworthy AI in education. It bridges technical transparency with human understanding, ensuring compliance with EU laws while empowering educators and learners. By embedding explainability into both AI design and classroom practice, education systems can harness AI’s benefits responsibly, maintaining fairness, accountability, and human agency.

Keywords

URL

https://knowledgeinnovation.eu/kic-publication/explainable-ai-in-education-fostering-human-oversight-and-shared-responsibility/

Summary generated by ChatGPT 5


New Horizons for Higher Education: Teaching and Learning with Generative AI


Source

N-TUTORR National Digital Leadership Network (NDLN) – Professor Mairéad Pratschke

Summary

This report examines how generative AI (GAI) is transforming higher education, presenting both opportunities and risks. It highlights three main areas: the impact of GAI on current teaching, assessment, and learner-centred practice; the development of emerging AI pedagogy, international best practice, and early research findings; and the broader context of digital transformation, regulation, and future skills. The analysis stresses that while GAI can enhance accessibility, personalisation, and engagement, it also raises critical concerns around academic integrity, bias, equity, and sustainability.

The report positions GAI as a general-purpose technology akin to the internet or electricity, reshaping the nature of knowledge and collaboration in higher education. It calls for institutional leaders to align AI adoption with sectoral values such as inclusion, integrity, and social responsibility, while also addressing infrastructure gaps, staff training, and regulatory compliance. To be effective, GAI use must be pedagogically aligned, ethically grounded, and strategically supported. The future success of higher education depends on preparing students not just to use AI, but to work with it critically, creatively, and responsibly.

Key Points

  • GAI challenges academic integrity but also enables personalised learning at scale.
  • Pedagogical alignment is essential: AI must support, not replace, learning processes.
  • Early research warns of overreliance and “cognitive offloading” without human oversight.
  • AI can widen inequities unless digital equity and inclusion are prioritised.
  • Institutional strategy must balance efficiency with effectiveness in learning design.
  • National and EU regulations (e.g., the AI Act) set high standards for responsible AI use.
  • Frontier AI models offer powerful capabilities but raise issues of bias and safety.
  • Educators increasingly take on roles as AI tool designers and facilitators.
  • Collaboration with industry is crucial for future career alignment and skills.
  • Sustained investment in infrastructure, training, and AI literacy is required.

Conclusion

Generative AI represents a transformative force in higher education. Its integration offers significant potential to augment human learning and expand access, but only if guided by values-led leadership, pedagogical rigour, and robust governance. Institutions must act strategically, embedding AI literacy and ethical practice to ensure that this “new horizon” supports both student success and the future sustainability of higher education.

Keywords

URL

https://www.ndln.ie/teaching-and-learning-with-generative-ai

Summary generated by ChatGPT 5