Latest Posts

Artificial intelligence may not be artificial


In a grand, domed library or scientific hall, a large group of scientists and academics in lab coats and formal attire are gathered around a circular table. Above them, a massive, glowing holographic human brain pulsates with light and intricate neural network connections, some extending into the starry dome. Books are seen floating around, symbolizing knowledge. Generated by Nano Banana.
The very nature of artificial intelligence is sparking profound philosophical and scientific debate, with some questioning whether it truly remains “artificial.” This image visually represents the deep contemplation surrounding AI’s origins and capabilities, suggesting that its complexity and emergent properties might hint at a form of intelligence that transcends purely synthetic creation. Image (and typos) generated by Nano Banana.

Source

Harvard Gazette

Summary

Blaise Agüera y Arcas challenges the framing of AI as “artificial” by showing how human brains and artificial systems share computational principles. He argues that brains evolved to compute—processing inputs into predictive models—and that evolution’s growth in complexity was powered not just by mutation and selection, but by cooperation (symbiogenesis). According to Agüera y Arcas, when organisms merge or cooperate, their computational capacity can scale in parallel, a pattern mirrored in how AI systems evolve. He explores these intimate parallels between biology and machine learning, situating life itself as computational from the start.

Key Points

  • Agüera y Arcas asserts that human brains are literally computational, not just metaphorically so.
  • Evolutionary complexity involved more than selection—cooperation and symbiosis (symbiogenesis) were crucial.
  • Brains and AI both operate through prediction: transforming inputs into outputs via internal models.
  • When systems cooperate (whether biological or synthetic), they achieve parallel computation and greater complexity.
  • The article bridges notions of life, computation, and intelligence—arguing the boundary between “natural” and “artificial” is less clear than often assumed.

URL

https://news.harvard.edu/gazette/story/2025/09/artificial-intelligence-may-not-be-artificial/

Summary generated by ChatGPT 5


Generative AI might end up being worthless – and that could be a good thing


A large, glowing, glass orb of generative AI data is shattering and dissipating into a pile of worthless dust. The ground is dry and cracked, and behind the orb, a single, small, green sprout is beginning to grow, symbolizing a return to human creativity. The scene visually represents the idea that the potential 'worthlessness' of AI could be a good thing. Generated by Nano Banana.
While the value of generative AI is a subject of intense debate, some argue that its potential to become ‘worthless’ could be a positive outcome. This image captures the idea that if AI’s allure fades, it could clear the way for a resurgence of human-led creativity, critical thinking, and innovation, ultimately leading to a more meaningful and authentic creative landscape. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

The article argues that the current hype around generative AI (GenAI) may oversell its value: it may eventually prove “worthless” in terms of sustainable returns, which wouldn’t necessarily be bad. Because GenAI is costly to operate and its productivity gains have so far been modest, many companies could fail to monetise it. Such a collapse might temper hype, reduce wasteful spending, and force society to focus on deeper uses of AI (ethics, reliability, human-centred value) rather than chasing illusions. The author sees a scenario where AI becomes a modest tool rather than the transformative juggernaut many expect.

Key Points

  • GenAI’s operational costs are high and monetisation is uncertain, so many ventures may fail.
  • Overhyping AI risks creating bubble dynamics—lots of investment chasing little real value.
  • A “worthless” AI future may force more careful, grounded development rather than blind expansion.
  • It could shift attention to AI’s limits, ethics, robustness, and human oversight.
  • The collapse of unrealistic expectations might be healthier than unchecked hype.

URL

https://www.theconversation.com/generative-ai-might-end-up-being-worthless-and-that-could-be-a-good-thing-266046

Summary generated by ChatGPT 5


AI and human learning


Four professionals (three female, one male) are seated around a modern round table in a library setting. From the center of the table, two intertwining beams of light, one blue (labeled 'AI ASSISTANCE') and one orange (labeled 'HUMAN COGNITION'), rise upwards, filled with icons representing data, brains, books, and scientific symbols. The beams symbolize the synergistic relationship between AI and human learning. Generated by Nano Banana.
Exploring the dynamic interplay between artificial intelligence and human cognition, this image visualises a future where AI acts as a powerful assistant, enhancing and amplifying our natural learning capabilities. It underscores a collaborative relationship, where technology and human intellect intertwine to unlock new depths of understanding and innovation. Image (and typos) generated by Nano Banana.

Source

Dawn

Summary

The article examines how Large Language Models and chatbots are reshaping learning in schools and universities. While AI boosts productivity in basic tasks (summarisation, drafting), it fails to deliver on more complex, multi-step reasoning. The author warns that overreliance on AI risks stunting skills such as reading, writing, reasoning, and sustained deep thought. Learning is not just about output; it is a process that tests and builds internal faculties. AI’s inherent hallucination problem adds further risk. The piece argues for institutional safeguards and restrictions on AI use in foundational learning stages.

Key Points

  • AI improves efficiency in simple tasks, but shows no clear gains in complex, multi-stage reasoning.
  • Learning as process is key: reading and writing actively shape thinking, and AI cannot substitute for that process.
  • Overuse of AI could lead to deskilling: students risk losing internal cognitive capacity.
  • The hallucination risk (AI generating false content) undermines trust in AI outputs.
  • Institutions need to assign boundaries to AI use, particularly where literacy and foundational skills are at stake.

URL

https://www.dawn.com/news/1945405

Summary generated by ChatGPT 5


How AI is reshaping education – from teachers to students


A split image depicting the impact of AI on education. On the left, a female teacher stands in front of a holographic 'AI POWERED INSTRUCTION' diagram, addressing a group of students. On the right, students are engaged with 'AI LEARNING PARTNER' interfaces, one wearing a VR headset. A central glowing orb with 'EDUCATION TRANSFORMED: AI' connects both sides, symbolizing the pervasive change AI brings to both teaching and learning. Generated by Nano Banana.
From empowering educators with intelligent instruction tools to providing students with personalised AI learning partners, artificial intelligence is fundamentally reshaping every facet of education. This image illustrates the transformative journey, highlighting how AI is creating new dynamics in classrooms and preparing both teachers and learners for a future redefined by technology. Image (and typos) generated by Nano Banana.

Source

TribLIVE

Summary

In this article, educators in a Pennsylvania school district discuss how AI is being woven into teaching practice and student learning—not by replacing teachers, but by amplifying their capacity. AI tools like Magic School help teachers personalise lesson plans, adjust reading levels, reduce repetitive tasks, and monitor student use. A “traffic light” system is used to label assignments by the allowed level of AI. New teachers are required to learn AI tools, and students begin learning about AI and its ethical use from the early grades. The district emphasises that AI should not replace human work but free teachers to focus more on interpersonal and higher-order thinking.

Key Points

  • Magic School is used to adapt assignments by subject, grade, and reading level, giving teachers flexibility.
  • Teachers are being trained and supported in AI adoption via workshops, pilot programs, and guided experiments.
  • A colour-coded “traffic light” system distinguishes when AI is allowed (green), allowed for some parts (yellow), or disallowed (red).
  • Starting in early grades, students are taught what AI is and how to use it ethically; higher grades incorporate more active use.
  • The goal: reduce workload on teachers for repetitive tasks so they can devote more energy to student interaction and complex thinking.

URL

https://triblive.com/local/regional/heres-how-ai-is-reshaping-education-from-teachers-to-students/

Summary generated by ChatGPT 5


Universities give up using software to detect AI in students’ work


In a university meeting room, a holographic display shows a broken padlock icon, symbolizing the failure or abandonment of AI detection software. Professionals are seated around a conference table, some looking at laptops. Generated by Nano Banana.
Universities are re-evaluating their strategies for academic integrity as many are moving away from relying on software to detect AI-generated content in student assignments. This shift reflects growing complexities and challenges in accurately identifying AI’s role in students’ work. Image (and typos) generated by Nano Banana.

Source

RNZ

Summary

Several New Zealand universities, including Massey, Auckland, and Victoria, have abandoned AI-detection software in student assessments, citing unreliability and inconsistency. Massey University’s move followed a major online exam monitoring failure in 2024, after which academics reported that detection results were often misused to accuse students. Research shows detection tools are easy to fool, leading institutions to shift towards alternative strategies: secured in-person assessments, oral defences, and checking document version histories. Universities stress they are not giving up on integrity but adapting to a changing environment by embedding AI literacy and focusing on preventative measures rather than flawed detection.

Key Points

  • Massey, Auckland, and Victoria universities no longer use AI detection software due to poor reliability.
  • Detection tools were inconsistent, with some staff misusing results to accuse students.
  • Alternative checks include document history tracking, professional judgement, and oral exams.
  • Universities focus on secured assessments (e.g. labs, studios, exams) rather than online monitoring.
  • The shift aims to prioritise AI literacy, ethics, and learning-centred approaches over surveillance.

URL

https://www.rnz.co.nz/news/national/574517/universities-give-up-using-software-to-detect-ai-in-students-work

Summary generated by ChatGPT 5