
Source
Inside Higher Ed
Summary
Cornell University researchers have found that AI-generated college admission essays read as noticeably generic and are easily distinguished from human writing. In a study comparing 30,000 human-written essays with AI-generated versions, the AI essays often failed to convey authentic personal narratives. When researchers added personal details for context, the AI tools tended to overemphasise those keywords, producing essays that sounded even more mechanical. While the study’s authors note that AI can be helpful for editing and feedback, they caution against using it to produce full drafts. The team also developed a detection model that identified AI-generated essays with near-perfect accuracy.
Key Points
- Cornell researchers compared AI and human-written college admission essays.
- AI-generated essays lacked authenticity and were easily recognised.
- Adding personal traits often made AI writing sound more artificial.
- AI can offer useful feedback, particularly for weaker writers, but should not be used to draft full essays.
- A detection model identified AI-written essays with high accuracy.
Keywords
URL
Summary generated by ChatGPT 5