This Professor Let Half His Class Use AI. Here’s What Happened


An academic experiment unfolds: Visualizing the stark differences in engagement and performance between students who used AI and those who did not, as observed by one professor. Image (and typos) generated by Nano Banana.

Source

Gizmodo

Summary

A study by University of Massachusetts Amherst professor Christian Rojas compared two sections of the same advanced economics course: one permitted structured AI use, the other did not. The results revealed that allowing AI under clear guidelines improved student engagement, confidence, and reflective learning but did not affect exam performance. Students with AI access reported greater efficiency and satisfaction with course design while developing stronger habits of self-correction and critical evaluation of AI outputs. Rojas concludes that carefully scaffolded AI integration can enrich learning experiences without fostering dependency or academic shortcuts, though larger studies are needed.

Key Points

  • Structured AI use increased engagement and confidence but not exam scores.
  • Students used AI for longer, more focused sessions and reflective learning.
  • Positive perceptions grew regarding efficiency and instructor quality.
  • AI integration encouraged editing, critical thinking, and ownership of ideas.
  • Rojas stresses that broader trials are required to validate the results.

Keywords

URL

https://gizmodo.com/this-professor-let-half-his-class-use-ai-heres-what-happened-2000678960

Summary generated by ChatGPT 5


The Case Against AI Disclosure Statements


Challenging compelled transparency: A visual argument against mandatory AI disclosure statements, set against the backdrop of legal scrutiny. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

Julie McCown, an associate professor of English at Southern Utah University, argues that mandatory AI disclosure statements in higher education are counterproductive. Initially designed to promote transparency and responsible use, these statements have instead reinforced a culture of guilt, distrust, and surveillance. McCown contends that disclosure requirements stigmatise ethical AI use and inhibit open dialogue between students and educators. Rather than policing AI use, she advocates normalising it within learning environments, rethinking assessment design, and fostering trust. Transparency, she suggests, emerges from safety and shared experimentation, not coercion.

Key Points

  • Mandatory AI disclosure creates a culture of confession and distrust.
  • Research shows disclosure reduces perceived trustworthiness regardless of context.
  • Anti-AI bias drives use underground and suppresses AI literacy.
  • Assignments should focus on quality and integrity of writing, not AI detection.
  • Normalising AI through reflective practice and open discussion builds genuine transparency.

Keywords

URL

https://www.insidehighered.com/opinion/views/2025/10/28/case-against-ai-disclosure-statements-opinion

Summary generated by ChatGPT 5


Dartmouth Builds Its Own AI Chatbot for Student Well-Being


Dartmouth College takes a proactive step in student support by developing its own AI chatbot, Evergreen (rendered as “Dartmouth Companion” by the image model). This innovative tool aims to provide accessible assistance and resources for student well-being, addressing concerns from academics to social life. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

Dartmouth College is developing Evergreen, a student-designed AI chatbot aimed at improving mental health and well-being on campus. Led by Professor Nicholas Jacobson, the project involves more than 130 undergraduates who contribute research, dialogue writing, and content to make the chatbot conversational and evidence-based. Evergreen offers tailored guidance on health topics such as exercise, sleep, and time management, using opt-in data from wearables and campus systems. Unlike third-party wellness apps, it is student-built, privacy-focused, and designed to intervene early when students show signs of distress. A trial launch is planned for autumn 2026, with potential for wider adoption across universities.

Key Points

  • Evergreen is a Dartmouth-built AI chatbot designed to support student well-being.
  • Over 130 undergraduate researchers are developing its conversational features.
  • The app personalises feedback using student-approved data such as sleep and activity.
  • Safety features alert a student's self-identified support team if the user is in crisis.
  • The first controlled trial is set for 2026, with plans to share the model with other colleges.

Keywords

URL

https://www.insidehighered.com/news/student-success/health-wellness/2025/10/14/dartmouth-builds-its-own-ai-chatbot-student-well

Summary generated by ChatGPT 5