This Professor Let Half His Class Use AI. Here’s What Happened


A split classroom scene with a professor in the middle, presenting data. The left side, labeled "GROUP A: WITH AI," shows disengaged students with "F" grades. The right side, labeled "GROUP B: NO AI," shows engaged students with "A+" grades, depicting contrasting outcomes of AI use in a classroom experiment.
An academic experiment unfolds: Visualizing the stark differences in engagement and performance between students who used AI and those who did not, as observed by one professor. Image (and typos) generated by Nano Banana.

Source

Gizmodo

Summary

A study by University of Massachusetts Amherst professor Christian Rojas compared two sections of the same advanced economics course—one permitted structured AI use, the other did not. The results revealed that allowing AI under clear guidelines improved student engagement, confidence, and reflective learning but did not affect exam performance. Students with AI access reported greater efficiency and satisfaction with course design while developing stronger habits of self-correction and critical evaluation of AI outputs. Rojas concludes that carefully scaffolded AI integration can enrich learning experiences without fostering dependency or academic shortcuts, though larger studies are needed.

Key Points

  • Structured AI use increased engagement and confidence but not exam scores.
  • Students used AI for longer, more focused sessions and reflective learning.
  • Students' perceptions of course efficiency and instructor quality grew more positive.
  • AI integration encouraged editing, critical thinking, and ownership of ideas.
  • Researchers stress that broader trials are required to validate results.

Keywords

URL

https://gizmodo.com/this-professor-let-half-his-class-use-ai-heres-what-happened-2000678960

Summary generated by ChatGPT 5


Where Does Human Thinking End and AI Begin? An AI Authorship Protocol Aims to Show the Difference


A split image contrasting human and AI cognitive processes. On the left, a woman writes, surrounded by concepts like "HUMAN INTUITION" and "ORIGINAL THOUGHT." On the right, a man works at a computer, with "AI GENERATION" and "COMPUTATIONAL LOGIC" displayed. A central vertical bar indicates an "AUTHORSHIP PROTOCOL: 60% HUMAN / 40% AI."
Decoding authorship: A visual representation of the intricate boundary between human creativity and AI generation, highlighting the need for protocols to delineate their contributions. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

Eli Alshanetsky, a philosophy professor at Temple University, warns that as AI-generated writing grows increasingly polished, the link between human reasoning and authorship is at risk of dissolving. To preserve academic and professional integrity, his team is piloting an “AI authorship protocol” that verifies human engagement during the creative process without resorting to surveillance or detection. The system embeds real-time reflective prompts and produces a secure “authorship tag” confirming that work aligns with specified AI-use rules. Alshanetsky argues this approach could serve as a model for ensuring accountability and trust across education, publishing, and professional fields increasingly shaped by AI.
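
The article describes the protocol only at a high level, so the sketch below is purely illustrative and not Alshanetsky's actual system. It assumes an "authorship tag" could be implemented as an HMAC over a record of the writer's responses to reflective prompts plus the declared AI-use terms, which a course platform could later verify had not been altered. The key, function names, and record fields are all hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical illustration only: assumes the platform holds a signing key
# and tags a record of the writer's reflective responses and AI-use terms.
SECRET_KEY = b"course-issued-secret"


def make_authorship_tag(student_id: str, ai_use_terms: str, prompt_responses: list[str]) -> dict:
    """Bundle the engagement record and attach a verification tag."""
    record = {
        "student_id": student_id,
        "ai_use_terms": ai_use_terms,
        "prompt_responses": prompt_responses,
        "timestamp": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "tag": tag}


def verify_authorship_tag(bundle: dict) -> bool:
    """Recompute the tag to confirm the submitted record was not tampered with."""
    payload = json.dumps(bundle["record"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, bundle["tag"])


if __name__ == "__main__":
    bundle = make_authorship_tag(
        "student-42",
        "AI may suggest outlines; final prose must be the student's own.",
        ["My thesis is that...", "I rejected the AI's framing because..."],
    )
    print(verify_authorship_tag(bundle))  # True unless the record was altered
```

The design point this sketch tries to capture is the one the article emphasizes: verification attaches to a record of the writer's engagement under agreed AI-use terms, rather than to surveillance or after-the-fact AI detection.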

Key Points

  • Advanced AI threatens transparency around human thought in writing and decision-making.
  • A new authorship protocol links student output to authentic reasoning.
  • The system uses adaptive AI prompts and verification tags to confirm engagement.
  • It avoids intrusive monitoring by building AI-use terms into the submission process.
  • The model could strengthen trust in professions dependent on human judgment.

Keywords

URL

https://theconversation.com/where-does-human-thinking-end-and-ai-begin-an-ai-authorship-protocol-aims-to-show-the-difference-266132

Summary generated by ChatGPT 5