Latest Posts

Dartmouth Builds Its Own AI Chatbot for Student Well-Being


A close-up of a digital display screen showing a friendly AI chatbot interface titled "DARTMOUTH COMPANION." The chatbot has an avatar of a friendly character wearing a green scarf with the Dartmouth shield. Text bubbles read "Hi there! I'm here to support you. How you feeling today?" with clickable options like "Stress," "Social Life," and "Academics." In the blurred background, several college students are visible in a modern, comfortable common area, working on laptops and chatting, suggesting a campus environment. The Dartmouth logo (pine tree) is visible at the bottom of the screen. Image (and typos) generated by Nano Banana.
Dartmouth College takes a proactive step in student support by developing its own AI chatbot. This innovative tool aims to provide accessible assistance and resources for student well-being, addressing concerns from academics to social life. Image (and typos) generated by Nano Banana.

Source

Inside Higher Ed

Summary

Dartmouth College is developing Evergreen, a student-designed AI chatbot aimed at improving mental health and well-being on campus. Led by Professor Nicholas Jacobson, the project involves more than 130 undergraduates contributing research, dialogue, and content creation to make the chatbot conversational and evidence-based. Evergreen offers tailored guidance on health topics such as exercise, sleep, and time management, using opt-in data from wearables and campus systems. Unlike third-party wellness apps, it is student-built, privacy-focused, and designed to intervene early when students show signs of distress. A trial launch is planned for autumn 2026, with potential for wider adoption across universities.

Key Points

  • Evergreen is a Dartmouth-built AI chatbot designed to support student well-being.
  • Over 130 undergraduate researchers are developing its conversational features.
  • The app personalises feedback using student-approved data such as sleep and activity.
  • Safety features alert a self-identified support team if a user is in crisis.
  • The first controlled trial is set for 2026, with plans to share the model with other colleges.

Keywords

URL

https://www.insidehighered.com/news/student-success/health-wellness/2025/10/14/dartmouth-builds-its-own-ai-chatbot-student-well

Summary generated by ChatGPT 5


AI Systems and Humans ‘See’ the World Differently – and That’s Why AI Images Look So Garish


A split image of a rolling green landscape under a sky with clouds. The left side, labeled "HUMAN VISION," shows a natural, soft-lit scene with realistic colors. The right side, labeled "AI PERCEPTION," depicts the exact same landscape but with intensely saturated, almost neon, and unrealistic colors, particularly in the foreground grass which glows with a rainbow of hues. A stark, jagged white line divides the two halves, and subtle digital code overlays the AI side. The central text reads "HOW AI SEES THE WORLD." Image (and typos) generated by Nano Banana.
Ever wonder why AI-generated images sometimes have a unique, almost unnatural vibrancy? This visual contrast highlights the fundamental differences in how AI systems and human perception process and interpret visual information, explaining the often “garish” aesthetic of AI art. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

T. J. Thomson explores how artificial intelligence perceives the visual world in ways that diverge sharply from human vision. His study, published in Visual Communication, compares AI-generated images with human-created illustrations and photographs to reveal how algorithms process and reproduce visual information. Unlike humans, who interpret colour, depth, and cultural context, AI relies on mathematical patterns, metadata, and comparisons across large image datasets. As a result, AI-generated visuals tend to be boxy, oversaturated, and generic – reflecting biases from stock photography and limited training diversity. Thomson argues that understanding these differences can help creators choose when to rely on AI for efficiency and when human vision is needed for authenticity and emotional impact.

Key Points

  • AI perceives visuals through data patterns and metadata, not sensory interpretation.
  • AI-generated images ignore cultural and contextual cues and default to photorealism.
  • Colours and shapes in AI images are often exaggerated or artificial due to training biases.
  • Human-made images evoke authenticity and emotional engagement that AI versions lack.
  • Knowing when to use AI or human vision is key to effective visual communication.

Keywords

URL

https://theconversation.com/ai-systems-and-humans-see-the-world-differently-and-thats-why-ai-images-look-so-garish-260178

Summary generated by ChatGPT 5


Experts Warn AI Could Reshape Teen Brains


A focused teenage boy looks down at a glowing digital tablet displaying complex data. Above his head, a bright blue, intricate holographic representation of a human brain pulsates with interconnected data points and circuits, symbolizing the impact of technology. In the blurred background, several adult figures in professional attire stand, observing the scene, representing the "experts." Image (and typos) generated by Nano Banana.
As artificial intelligence becomes increasingly integrated into daily life, experts are raising concerns about its potential long-term effects on the developing brains of teenagers. Explore the warnings and discussions surrounding AI’s influence on cognitive development and neural pathways. Image (and typos) generated by Nano Banana.

Source

CNBC

Summary

Ernestine Siu reports growing concern among scientists and regulators that prolonged use of generative AI by children and teenagers could alter brain development and weaken critical thinking skills. A 2025 MIT Media Lab study found that reliance on large language models (LLMs) such as ChatGPT reduced neural connectivity compared with unaided writing tasks, suggesting “cognitive debt” from over-dependence on external support. Researchers warn that early exposure may limit creativity, self-regulation, and critical analysis, while privacy and emotional risks also loom large as children anthropomorphise AI companions. Experts urge limits on generative AI use among young people, stronger parental oversight, and the cultivation of both AI and digital literacy to safeguard cognitive development and wellbeing.

Key Points

  • One in four U.S. teens now uses ChatGPT for schoolwork, double the 2023 rate.
  • MIT researchers found reduced brain network activity in users relying on LLMs.
  • Overuse of AI may lead to “cognitive debt” and hinder creativity and ownership of work.
  • Younger users are particularly vulnerable to emotional and privacy risks.
  • Experts recommend age-appropriate AI design, digital literacy training, and parental engagement.

Keywords

URL

https://www.cnbc.com/2025/10/13/experts-warn-ai-llm-chatgpt-gemini-perplexity-claude-grok-copilot-could-reshape-teen-youth-brains.html

Summary generated by ChatGPT 5


New elephants in the Generative AI room? Acknowledging the costs of GenAI to develop ‘critical AI literacy’

by Sue Beckingham, NTF PFHEA – Sheffield Hallam University and Peter Hartley, NTF – Edge Hill University
Estimated reading time: 8 minutes
Image created using DALL-E 2 (2024) – reused to save cost

The GenAI industry regularly proclaims that the ‘next release’ of the chatbot of your choice will get closer to its ultimate goal – Artificial General Intelligence (AGI) – where AI can complete the widest range of tasks better than the best humans.

Are we providing sufficient help and support to our colleagues and students to understand and confront the implications of this direction of travel?

Or is AGI either an improbable dream or the ultimate threat to humanity?

Along with many (most?) GenAI users, we have seen impressive developments but not yet seen apps demonstrating anything close to AGI. OpenAI released GPT-5 in 2025 and Sam Altman (CEO) enthused: “GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.” But critical reaction to this new model was very mixed and he had to backtrack, admitting that the launch was “totally screwed up”. Hopefully, this provides a bit of breathing space for Higher Education – an opportunity to review how we encourage staff and students to adopt an appropriately critical and analytic perspective on GenAI – what we would call ‘critical AI literacy’.

Acknowledging the costs of Generative AI

Critical AI literacy involves understanding how to use GenAI responsibly and ethically – knowing when and when not to use it, and the reasons why. One elephant in the room is that GenAI incurs costs, and we need to acknowledge these.

Staff and students should be aware of ongoing debates on GenAI’s environmental impact, especially given increasing pressures to develop GenAI as your ‘always-on/24-7’ personal assistant. Incentives to treat GenAI as a ‘free’ service have increased with OpenAI’s move into education, offering free courses and certification. We also see increasing pressure to integrate GenAI into pre-university education, as illustrated by the recent ‘Back to School’ AI Summit 2025 and accompanying book, which promises a future of ‘creativity unleashed’.

We advocate a multi-factor definition of the ‘costs’ of GenAI so we can debate its capabilities and limitations from the broadest possible perspective. For example, we must evaluate opportunity costs to users. Recent research, including brain scans on individual users, found that over-use of GenAI (or specific patterns of use) can have a definite negative impact on users’ cognitive capacities and performance, including metacognitive laziness and cognitive debt. We group costs into four key areas: cost to the individual, to the environment, to knowledge, and to future jobs.

Cost of Generative AI to the individual, environment, knowledge and future jobs
(Beckingham and Hartley, 2025)

Cost to the individual

Fees: Subscription fees for GenAI tools range from free basic versions through to several levels of paid upgrade (note: subscription tiers are continually changing). Premium offerings such as enterprise AI assistants are costly, limiting access to businesses or high-income users.

Accountability: Universities must provide clear guidelines on what can and cannot be shared with these tools, along with the concerns and implications of infringing copyright.

Over-reliance: Outcomes for learning depend on how GenAI apps are used. If students rely on AI-generated content too heavily or exclusively, they can make poor decisions, with a detrimental effect on skills.

Safety and mental health: Increased use of personal assistants providing ‘personal advice’ for socioemotional purposes can lead to increased social isolation.

Cost to the environment

Energy consumption – Training and deploying Large Language Models (LLMs) requires millions of GPU hours, and energy demand increases substantially for image generation. The growth of data centres also raises concerns about energy supply.

Emissions and carbon footprint – Developing the technology creates emissions through the mining, manufacturing, transport and recycling processes.

Water consumption – Water needed for cooling data centres equates to millions of gallons per day.

e-Waste – This includes toxic materials (e.g. lead, barium, arsenic and chromium) in components within ever-growing numbers of LLM servers. Obsolete servers generate substantial toxic emissions if not recycled properly.

Cost to knowledge

Erosion of expertise – Models are trained on information publicly available on the internet, on data from formal partnerships with third parties, and on information that users, human trainers and researchers provide or generate.

Ethics – Ethical concerns highlight the lived experiences of those employed to annotate data and moderate text, images and video in order to remove toxic content.

Misinformation – Indiscriminate data scraping from blogs, social media, and news sites, coupled with text entered by users of LLMs, can result in ‘regurgitation’ of personal data, hallucinations and deepfakes.

Bias – Algorithmic bias and discrimination occur when LLMs inherit social patterns, perpetuating stereotypes relating to gender, race, disability and other protected characteristics.

Cost to future jobs

Job displacement – GenAI is “reshaping industries and tasks across all sectors”, driving business transformation. But will these technologies replace rather than augment human work?

Job matching – Increased use of AI in recruitment and by jobseekers creates the risk that GenAI misrepresents skills, making it harder for profile-analysis tools to identify candidates who can genuinely evidence the skills they claim.

New skills – Reskilling and upskilling in AI and big data top the list of fastest-growing workplace skills. A lack of opportunity to do so can lead to increased unemployment and inequality.

Wage suppression – Workers with skills that enable them to use AI may see their productivity and wages increase, whereas those who do not may see their wages decrease.

The way forward

We can only develop AI literacy by actively involving our student users. Previously we have argued that institutions/faculties should establish ‘collaborative sandpits’ offering opportunities for discussion and ‘co-creation’. Staff and students need space for this so that they can contribute to debates on what we really mean by ‘responsible use of GenAI’ and develop procedures to ensure responsible use. This is one area where collaborations/networks like GenAI N3 can make a significant contribution.

Sadly, we see too many commentaries which downplay, neglect or ignore GenAI’s issues and limitations. For example, the latest release from OpenAI – Sora 2 – offers text-to-video and has raised some important challenges to copyright regulations. There is also the continuing problem of hallucinations. Despite recent claims of improved accuracy, GenAI is still susceptible to them. But how do we identify and guard against untruths which are confidently expressed by the chatbot?

We all need to develop a realistic perspective on GenAI’s likely development. The pace of technical change (and some rather secretive corporate habits) makes this very challenging for individuals, so we need proactive and co-ordinated approaches by course/programme teams. The practical implication of this discussion is that we all need to develop a much broader understanding of GenAI than a simple ‘press this button’ approach.

Reference

Beckingham, S. and Hartley, P. (2025). In search of ‘Responsible’ Generative AI (GenAI). In: Doolan, M.A. and Ritchie, L., eds. Transforming teaching excellence: Future proofing education for all. Leading Global Excellence in Pedagogy, Volume 3. UK: IFNTF Publishing. ISBN 978-1-7393772-2-9 (ebook). https://amzn.eu/d/gs6OV8X

Sue Beckingham

Associate Professor Learning and Teaching
Sheffield Hallam University

Sue Beckingham is an Associate Professor in Learning and Teaching at Sheffield Hallam University. Externally she is a Visiting Professor at Arden University and a Visiting Fellow at Edge Hill University. She is also a National Teaching Fellow, Principal Fellow of the Higher Education Academy and Senior Fellow of the Staff and Educational Developers Association. Her research interests include the use of technology to enhance active learning, and she has published and presented this work internationally as an invited keynote speaker. Recent book publications include Using Generative AI Effectively in Higher Education: Sustainable and Ethical Practices for Learning, Teaching and Assessment.

Peter Hartley

Visiting Professor
Edge Hill University

Peter Hartley is now a Higher Education Consultant and Visiting Professor at Edge Hill University, following previous roles as Professor of Education Development at the University of Bradford and Professor of Communication at Sheffield Hallam University. A National Teaching Fellow since 2000, he has promoted new technology in education and now focuses on the applications and implications of Generative AI, co-editing and contributing to the SEDA/Routledge publication Using Generative AI Effectively in Higher Education (2024; paperback edition 2025). He has also produced several guides and textbooks for students (e.g. as co-author of Success in Groupwork, 2nd Edn). Ongoing work includes programme assessment strategies, concept mapping and visual thinking.


Keywords


Professors Share Their Findings and Thoughts on the Use of AI in Research


Three professors (one woman, two men) sit around a large polished conference table in a modern office with bookshelves in the background. They are engaged in a discussion, with open laptops, notebooks, and coffee cups in front of them. Overlaying the scene are glowing holographic data visualizations and graphs, with the words "AI IN ACADEMIC RESEARCH: FINDINGS & PERSPECTIVES" digitally projected in the center, representing the intersection of human intellect and artificial intelligence. Image (and typos) generated by Nano Banana.
Dive into the evolving landscape of academic research as leading professors share their insights and discoveries on integrating AI tools. Explore the benefits, challenges, and future implications of artificial intelligence in scholarly pursuits. Image (and typos) generated by Nano Banana.

Source

The Cavalier Daily

Summary

At the University of Virginia, faculty across disciplines are exploring how artificial intelligence can accelerate and reshape academic research. Associate Professor Hudson Golino compares AI’s transformative potential to the introduction of electricity in universities, noting its growing use in data analysis and conceptual exploration. Economist Anton Korinek, recently named among Time’s 100 most influential people in AI, evaluates where AI adds value – from text synthesis and coding to ideation – while cautioning that tasks like mathematical modelling still require human oversight. Professors Mona Sloane and Renee Cummings stress ethical transparency, inclusivity, and the need for disclosure when using AI in research, arguing that equity and critical reflection must remain at the heart of innovation.

Key Points

  • AI is increasingly used at the University of Virginia for research and analysis across disciplines.
  • Golino highlights AI’s role in improving efficiency but calls for deeper institutional understanding.
  • Korinek finds AI most effective for writing, coding, and text synthesis, less so for abstract modelling.
  • Sloane and Cummings advocate transparency, ethical use, and inclusion in AI-assisted research.
  • Faculty urge a balance between efficiency, equity, and accountability in AI’s integration into academia.

Keywords

URL

https://www.cavalierdaily.com/article/2025/10/professors-share-their-findings-and-thoughts-on-the-use-of-ai-in-research

Summary generated by ChatGPT 5