AI is infiltrating the classroom. Here’s how teachers and students say they use it


A diverse group of students in a modern classroom interacting with laptops and holographic AI interfaces, while a teacher points to an interactive whiteboard displaying "AI." Image (and typos) generated by Nano Banana.
AI is rapidly integrating into educational settings, transforming how both teachers and students engage with learning and information. This image visualizes the dynamic interaction between human instruction and artificial intelligence in a contemporary classroom environment. Image (and typos) generated by Nano Banana.

Source

The Los Angeles Times

Summary

Surveys and research suggest AI use is rising fast in education, with teachers and students showing different patterns of adoption and concern. Teachers tend to use AI for lesson preparation and administrative tasks, though many rarely use it in live instruction. Students lean on AI for concept explanation, research ideas, and summarising content, but worry about plagiarism risks, errant AI output, and negative academic judgments. The article surfaces a tension: AI can ease workloads and support learning, but its misuse or overreliance may erode creativity, trust, and academic integrity.

Key Points

  • About 27 % of teachers across multiple countries use AI weekly for lesson planning, though half of those rarely deploy it during class.
  • Teachers see AI as helpful in streamlining routine tasks but worry it may harm student originality and increase cheating.
  • Students use AI mainly to explain concepts, summarise articles, and suggest research—but 18 % admit using AI-generated text in assignments.
  • Two main deterrents for students: fear of being accused of academic misconduct, and concern about AI’s accuracy or bias.
  • The surge in student AI adoption (from 66 % to 92 % in one UK study) reveals the speed with which AI is becoming a study tool, not just a novelty.

Keywords

URL

https://www.latimes.com/california/story/2025-09-27/what-students-teachers-say-about-ai-school

Summary generated by ChatGPT 5


Colleges and Schools Must Block and Ban Agentic AI Browsers Now. Here’s Why


A group of students and a teacher in a library setting, with a prominent holographic display showing a red "blocked" symbol over an internet browser interface, symbolising the banning of agentic AI. Image (and typos) generated by Nano Banana.
The rise of agentic AI browsers presents new challenges for educational institutions. This image illustrates the urgent need for colleges and schools to implement blocking and banning measures to maintain academic integrity and a secure learning environment. Image (and typos) generated by Nano Banana.

Source

Forbes

Summary

Aviva Legatt warns that “agentic AI browsers” — tools able to log in, navigate, and complete tasks inside learning platforms — pose immediate risks to education. Unlike text-only AI, these can impersonate students or instructors, complete quizzes, grade assignments, and even bypass security like two-factor authentication. This creates threats not just of cheating but of data breaches and compliance failures under U.S. federal law. Faculty report “vaporised learning” when agents replace the effort needed to learn. Legatt urges institutions to block such browsers now, redesign assessments to resist automation, and treat agentic AI as an enterprise-level governance and security issue.

Key Points

  • Agentic browsers automate LMS tasks: logging in, completing quizzes, grading, posting feedback.
  • Risks extend beyond cheating to credential theft, data compromise, and federal compliance breaches.
  • Experiments show guardrails are easily bypassed, allowing unauthorised access and impersonation.
  • Faculty adapt by shifting to oral defences, handwritten tasks, and requiring drafts/reflections.
  • Recommended response: block tools, redesign assessments, embed governance, invest in AI literacy.

Keywords

URL

https://www.forbes.com/sites/avivalegatt/2025/09/25/colleges-and-schools-must-block-agentic-ai-browsers-now-heres-why/

Summary generated by ChatGPT 5


Enacting Assessment Reform in a Time of Artificial Intelligence


Source

Tertiary Education Quality and Standards Agency (TEQSA), Australian Government

Summary

This resource addresses how Australian higher education can reform assessment in response to the rise of generative AI. Building on earlier work (Assessment Reform for the Age of Artificial Intelligence), it sets out strategies that align with the Higher Education Standards Framework while acknowledging that gen AI is now ubiquitous in student learning and professional practice. The central message is that detection alone is insufficient; instead, assessment must be redesigned to assure learning authentically, ethically, and sustainably.

The report outlines three main pathways: (1) program-wide assessment reform, which integrates assessment as a coherent system across degrees; (2) unit/subject-level assurance of learning, where each subject includes at least one secure assessment task; and (3) a hybrid approach combining both. Each pathway carries distinct advantages and challenges, from institutional resourcing and staff coordination to maintaining program coherence and addressing integrity risks. Critical across all approaches is the need to balance immediate integrity concerns with long-term goals of preparing students for an AI-integrated future.

Key Points

  • Generative AI necessitates structural assessment reform, not reliance on detection.
  • Assessments must equip students to participate ethically and critically in an AI-enabled society.
  • Assurance of learning requires multiple, inclusive, and contextualised approaches.
  • Program-level reform provides coherence and alignment but demands significant institutional commitment.
  • Unit-level assurance offers quick implementation but risks fragmentation.
  • Hybrid approaches balance flexibility with systemic assurance.
  • Over-reliance on traditional supervised exams risks reducing authenticity and equity.
  • Critical questions must guide reform: alignment across units, disciplinary variation, and student experience.
  • Assessment must reflect authentic professional practices where gen AI is legitimately used.
  • Ongoing collaboration and evidence-sharing across the sector are vital for sustainable reform.

Conclusion

The report concludes that assessment reform in the age of AI is not optional but essential. Institutions must move beyond short-term fixes and design assessment systems that assure learning, uphold integrity, and prepare students for future professional contexts. This requires thoughtful strategy, collaboration, and a willingness to reimagine assessment as a developmental, systemic, and values-driven practice.

Keywords

URL

https://www.teqsa.gov.au/guides-resources/resources/corporate-publications/enacting-assessment-reform-time-artificial-intelligence

Summary generated by ChatGPT 5


Academics ‘marking students down’ when they suspect AI use


A concerned academic, wearing glasses, sits across from a student, both looking at a transparent tablet displaying 'AI detection suspected - Grade Adjusted' with code and charts. A laptop with an essay is open on the left, and a document with a large red 'X' is on the table, symbolizing suspicion of AI use in academic work. Generated by Nano Banana.
The rise of AI in education presents new challenges for assessment. This image visualizes the tension and scrutiny faced by students as academics grapple with suspected AI use in assignments, leading to difficult conversations and potential grade adjustments. Image generated by Nano Banana.

Source

Times Higher Education

Summary

A recent study of academics in China’s Greater Bay Area reveals that some lecturers are reducing student marks if they suspect AI use, even when the students have declared using it or when institutional policy allows such use. The research, involving 33 academics, highlights that ambiguity around what constitutes legitimate AI use, combined with norms emphasising originality and independence, leads to inconsistent grading. In the humanities especially, suspicion of AI can attract harsher penalties. The lack of explicit expectations communicated to students exacerbates the issue, risking distrust and undermining the credibility of academic grading unless clearer standards are established.

Key Points

  • Academics are sometimes deducting marks based on suspicion of AI use, despite declared or permitted use.
  • The study involved 33 academics, many of whom report tension between policies that permit AI and traditional values of originality and independence.
  • Humanities lecturers are more likely than those in other disciplines to penalise suspected AI use.
  • Many institutions lack clear policies; expectations about AI use are often implicit, not explicitly communicated to students.
  • Without clarity, there is a risk of unfair marking, loss of trust between students and staff, and damage to the credibility of academic certifications.

Keywords

URL

https://www.timeshighereducation.com/news/academics-marking-students-down-when-they-suspect-ai-use

Summary generated by ChatGPT 5


Students’ complicated relationship with AI: ‘It’s inherently going against what college is’


A student stands in a grand, traditional library, looking conflicted between two glowing holographic displays. To the left, a blue 'AI: EFFICIENCY' display shows data and code. To the right, an orange 'COLLEGE: UNDERSTANDING' display hovers over an open book and desk lamp. The image symbolizes the internal conflict students face regarding AI in academia. Generated by Nano Banana.
Navigating the academic world with new AI tools presents a complex dilemma for students. This image illustrates the tension between the efficiency offered by AI and the foundational pursuit of deep understanding inherent to college education. It captures the internal debate students face as technology challenges traditional learning. Image generated by and typos courtesy of Nano Banana.

Source

The Irish Times

Summary

Many students express tension between using generative AI (GenAI) tools like ChatGPT and the traditional values of university education. Some avoid AI because they feel it undermines academic integrity or the effort they invested; others see benefit in using it for organising study, generating ideas, or off-loading mundane parts of coursework. Concerns include fairness (getting better grades for less effort), accuracy of chatbot-generated content, and environmental impact. Students also worry about loss of critical thinking and the changing nature of assignments as AI becomes more common. There is a call for clearer institutional guidelines, more awareness of policies, and equitable access and use.

Key Points

  • Using GenAI can feel like “offloading work,” conflicting with the idea of self-directed learning that many students believe defines college life.
  • Students worry about fairness: those who use AI may gain advantage over those who do not.
  • Accuracy is a concern: ChatGPT sometimes provides false information; students are aware of this risk.
  • Some students steer clear of AI altogether for fear of being suspected or accused of cheating, even when they have not used it.
  • Others find helpful uses: organising references, creating study timetables, acting as a “second pair of eyes” or “study companion.”

Keywords

URL

https://www.irishtimes.com/life-style/people/2025/09/20/students-complicated-relationship-with-ai-chatbots-its-inherently-going-against-what-college-is/

Summary generated by ChatGPT 5