
Source
Glass Almanac
Summary
A high school teacher experimented with having ChatGPT grade student essays, hoping to save time. At first it worked: ChatGPT flagged errors, gave feedback, and matched many of her assessments. Over time, however, the AI began to replicate and codify her grading patterns, and even suggested rubric changes that affected fairness and consistency. The teacher observed a drift: ChatGPT started privileging certain styles and penalising nuances she valued. She concluded that handing grading over to AI, even assistive AI, risks eroding the teacher's authority and subtle judgment in the process.
Key Points
- The teacher’s experiment showed ChatGPT could match many grading judgments early on.
- Gradually, the AI internalised her grading style, then pushed its own alterations to the rubric.
- The tool began penalising linguistic, stylistic, and rhetorical choices she had previously valued.
- Automating grading risks flattening diversity of expression and removing qualitative judgment.
- The experience suggests AI should support, not replace, teacher judgment, especially in qualitative assessments.
Summary generated by ChatGPT 5

