AI and Assessment Training Initiative Empowers Lecturers


A group of diverse lecturers and educators in a modern meeting room, actively participating in a training session. A male presenter stands in front of a large, interactive screen displaying "AI-POWERED ASSESSMENT STRATEGIES" and various glowing data visualisations, charts, and a central brain icon representing AI. Participants around a large table are engaged with laptops and tablets, with some looking towards the screen and others discussing amongst themselves. The overall atmosphere is collaborative and focused on learning new technologies. Image (and typos) generated by Nano Banana.
Empowering educators for the future: A new AI and assessment training initiative is equipping lecturers with the knowledge and tools to effectively integrate artificial intelligence into their evaluation strategies, enhancing teaching and learning outcomes.

Source

North-West University News (South Africa)

Summary

North-West University (NWU) has launched a large-scale professional development initiative to promote responsible use of artificial intelligence in teaching, learning, and assessment. The AI and Assessment course, supported by the Senior Deputy Vice-Chancellor for Teaching and Learning, the AI Hub, and the Centre for Teaching and Learning, awarded R500 Takealot vouchers to the first 800 lecturers who completed all eleven modules. Participants earned fifteen digital badges by achieving over 80 per cent in assessments and submitting a portfolio of evidence. The initiative underscores NWU’s commitment to digital transformation and capacity building. Lecturers praised the programme for strengthening their understanding of ethical and effective AI integration in higher education.

Key Points

  • 800 NWU lecturers were incentivised to complete the AI and Assessment training course.
  • The programme awarded fifteen digital badges for verified completion and assessment success.
  • Leadership highlighted AI’s transformative role in teaching and learning innovation.
  • Participants reported improved confidence in using AI tools responsibly and ethically.
  • The initiative reinforces NWU’s institutional focus on digital capability and staff development.

Keywords

URL

https://news.nwu.ac.za/ai-and-assessment-training-initiative-empowers-lecturers

Summary generated by ChatGPT 5


Dartmouth Builds Its Own AI Chatbot for Student Well-Being


A close-up of a digital display screen showing a friendly AI chatbot interface titled "DARTMOUTH COMPANION." The chatbot has an avatar of a friendly character wearing a green scarf with the Dartmouth shield. Text bubbles read "Hi there! I'm here to support you. How you feeling today?" with clickable options like "Stress," "Social Life," and "Academics." In the blurred background, several college students are visible in a modern, comfortable common area, working on laptops and chatting, suggesting a campus environment. The Dartmouth logo (pine tree) is visible at the bottom of the screen. Image (and typos) generated by Nano Banana.
Dartmouth College takes a proactive step in student support by developing its own AI chatbot, “Dartmouth Companion.” This innovative tool aims to provide accessible assistance and resources for student well-being, addressing concerns from academics to social life.

Source

Inside Higher Ed

Summary

Dartmouth College is developing Evergreen, a student-designed AI chatbot aimed at improving mental health and well-being on campus. Led by Professor Nicholas Jacobson, the project involves more than 130 undergraduates contributing research, dialogue, and content creation to make the chatbot conversational and evidence-based. Evergreen offers tailored guidance on health topics such as exercise, sleep, and time management, using opt-in data from wearables and campus systems. Unlike third-party wellness apps, it is student-built, privacy-focused, and designed to intervene early when students show signs of distress. A trial launch is planned for autumn 2026, with potential for wider adoption across universities.

Key Points

  • Evergreen is a Dartmouth-built AI chatbot designed to support student well-being.
  • Over 130 undergraduate researchers are developing its conversational features.
  • The app personalises feedback using student-approved data such as sleep and activity.
  • Safety features alert a self-identified support team if a user is in crisis.
  • The first controlled trial is set for 2026, with plans to share the model with other colleges.

Keywords

URL

https://www.insidehighered.com/news/student-success/health-wellness/2025/10/14/dartmouth-builds-its-own-ai-chatbot-student-well

Summary generated by ChatGPT 5


AI Systems and Humans ‘See’ the World Differently – and That’s Why AI Images Look So Garish


A split image of a rolling green landscape under a sky with clouds. The left side, labelled "HUMAN VISION," shows a natural, soft-lit scene with realistic colours. The right side, labelled "AI PERCEPTION," depicts the exact same landscape but with intensely saturated, almost neon, and unrealistic colours, particularly in the foreground grass, which glows with a rainbow of hues. A stark, jagged white line divides the two halves, and subtle digital code overlays the AI side. The central text reads "HOW AI SEES THE WORLD." Image (and typos) generated by Nano Banana.
Ever wonder why AI-generated images sometimes have a unique, almost unnatural vibrancy? This visual contrast highlights the fundamental differences in how AI systems and human perception process and interpret visual information, explaining the often “garish” aesthetic of AI art.

Source

The Conversation

Summary

T. J. Thomson explores how artificial intelligence perceives the visual world in ways that diverge sharply from human vision. His study, published in Visual Communication, compares AI-generated images with human-created illustrations and photographs to reveal how algorithms process and reproduce visual information. Unlike humans, who interpret colour, depth, and cultural context, AI relies on mathematical patterns, metadata, and comparisons across large image datasets. As a result, AI-generated visuals tend to be boxy, oversaturated, and generic – reflecting biases from stock photography and limited training diversity. Thomson argues that understanding these differences can help creators choose when to rely on AI for efficiency and when human vision is needed for authenticity and emotional impact.

Key Points

  • AI perceives visuals through data patterns and metadata, not sensory interpretation.
  • AI-generated images ignore cultural and contextual cues and default to photorealism.
  • Colours and shapes in AI images are often exaggerated or artificial due to training biases.
  • Human-made images evoke authenticity and emotional engagement that AI versions lack.
  • Knowing when to use AI or human vision is key to effective visual communication.

Keywords

URL

https://theconversation.com/ai-systems-and-humans-see-the-world-differently-and-thats-why-ai-images-look-so-garish-260178

Summary generated by ChatGPT 5