AI Systems and Humans ‘See’ the World Differently – and That’s Why AI Images Look So Garish


A split image of a rolling green landscape under a sky with clouds. The left side, labeled "HUMAN VISION," shows a natural, soft-lit scene with realistic colors. The right side, labeled "AI PERCEPTION," depicts the exact same landscape but with intensely saturated, almost neon, and unrealistic colors, particularly in the foreground grass which glows with a rainbow of hues. A stark, jagged white line divides the two halves, and subtle digital code overlays the AI side. The central text reads "HOW AI SEES THE WORLD."
Ever wonder why AI-generated images sometimes have a unique, almost unnatural vibrancy? This visual contrast highlights the fundamental differences in how AI systems and human perception process and interpret visual information, explaining the often “garish” aesthetic of AI art. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

T. J. Thomson explores how artificial intelligence perceives the visual world in ways that diverge sharply from human vision. His study, published in Visual Communication, compares AI-generated images with human-created illustrations and photographs to reveal how algorithms process and reproduce visual information. Unlike humans, who interpret colour, depth, and cultural context, AI relies on mathematical patterns, metadata, and comparisons across large image datasets. As a result, AI-generated visuals tend to be boxy, oversaturated, and generic – reflecting biases from stock photography and limited training diversity. Thomson argues that understanding these differences can help creators choose when to rely on AI for efficiency and when human vision is needed for authenticity and emotional impact.

Key Points

  • AI perceives visuals through data patterns and metadata, not sensory interpretation.
  • AI-generated images ignore cultural and contextual cues and default to photorealism.
  • Colours and shapes in AI images are often exaggerated or artificial due to training biases.
  • Human-made images evoke authenticity and emotional engagement that AI versions lack.
  • Knowing when to use AI or human vision is key to effective visual communication.

Keywords

URL

https://theconversation.com/ai-systems-and-humans-see-the-world-differently-and-thats-why-ai-images-look-so-garish-260178

Summary generated by ChatGPT 5


Admissions Essays Written by AI Are Generic and Easy to Spot


In a grand, wood-paneled library office, a serious female admissions officer in glasses sits at a desk piled with papers and laptops. A prominent holographic alert floats in front of her, reading "AI-GENERATED ESSAY DETECTED" in red. Below it, a comparison lists characteristics of "HUMAN" writing (e.g., unique voice) versus generic AI traits. One laptop screen displays "AI Detection Software" with a high probability score.
Despite sophisticated AI capabilities, admissions essays generated by artificial intelligence are often characterised by generic phrasing and a distinct lack of personal voice, making them relatively easy to spot. This image depicts an admissions officer using AI detection software and her own critical judgment to identify an AI-generated essay, underscoring the challenges and tools in maintaining authenticity in student applications. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

Cornell University researchers have found that AI-generated college admission essays are noticeably generic and easily distinguished from human writing. In a study comparing 30,000 human-written essays with AI-generated versions, the latter often failed to convey authentic personal narratives. When researchers added personal details for context, AI tools tended to overemphasise keywords, producing essays that sounded even more mechanical. While the study’s authors note that AI can be helpful for editing and feedback, they warn against using it to produce full drafts. The team also developed a detection model that could identify AI-generated essays with near-perfect accuracy.

Key Points

  • Cornell researchers compared AI and human-written college admission essays.
  • AI-generated essays lacked authenticity and were easily recognised.
  • Adding personal traits often made AI writing sound more artificial.
  • AI can provide useful feedback for weaker writers but not full essays.
  • A detection model identified AI-written essays with high accuracy.

Keywords

URL

https://www.insidehighered.com/news/quick-takes/2025/10/06/admissions-essays-written-ai-are-generic-and-easy-spot

Summary generated by ChatGPT 5


We are lecturers in Trinity College Dublin. We see it as our responsibility to resist AI


Five distinguished individuals, appearing as senior academics in traditional robes, stand solemnly behind a large wooden table in an ornate, historic library. In front of them, a glowing orange holographic screen displays 'AI' with complex data and schematics. The scene conveys a sense of responsibility and potential resistance to AI within a venerable academic institution.
In the hallowed halls of institutions like Trinity College Dublin, some educators are taking a principled stand, viewing it as their inherent responsibility to critically engage with and even resist the pervasive integration of AI into academic life. This image reflects a serious, considered approach to safeguarding traditional educational values amidst technological change. Image generated by Nano Banana.

Source

The Irish Times

Summary

Lecturers at Trinity College Dublin argue that even if all technical and ethical issues around generative AI were resolved, the use of GenAI would still undermine fundamental elements of university education: fostering authentic human thinking, cultivating critique, and resisting the commodification of learning. They emphasise that GenAI produces plausible but shallow output, contributes to environmental and ethical harms, and can flatten student voice. The authors believe universities should reject the narrative that GenAI's integration is inevitable, and instead double down on preserving human-centred pedagogies, critical thinking, and academic values.

Key Points

  • GenAI produces plausible but often shallow or false output and lacks true understanding.
  • Ethical, environmental, and social harms are tied to GenAI use.
  • Even with perfect versions, GenAI undermines authentic student thinking and writing.
  • Narratives of inevitability can be resisted: universities can choose otherwise.
  • Universities should reaffirm critical, human intellectual labour and values.

Keywords

URL

https://www.irishtimes.com/opinion/2025/09/04/opinion-we-are-lecturers-in-trinity-college-we-see-it-as-our-responsibility-to-resist-ai/

Summary generated by ChatGPT 5