AI and human learning


Four professionals (three female, one male) are seated around a modern round table in a library setting. From the center of the table, two intertwining beams of light, one blue (labeled 'AI ASSISTANCE') and one orange (labeled 'HUMAN COGNITION'), rise upwards, filled with icons representing data, brains, books, and scientific symbols. The beams symbolize the synergistic relationship between AI and human learning. Generated by Nano Banana.
Exploring the dynamic interplay between artificial intelligence and human cognition, this image visualises a future where AI acts as a powerful assistant, enhancing and amplifying our natural learning capabilities. It underscores a collaborative relationship, where technology and human intellect intertwine to unlock new depths of understanding and innovation. Image (and typos) generated by Nano Banana.

Source

Dawn

Summary

The article examines how Large Language Models and chatbots are reshaping learning in schools and universities. While AI boosts productivity on basic tasks (summarisation, drafting), it has yet to deliver comparable gains on complex, multi-step reasoning. The author warns that overreliance on AI risks stunting skills such as reading, writing, reasoning, and sustained, deep thought. Learning is not just about output; it’s a process that tests and builds internal faculties. AI’s inherent hallucination problem adds further risk. The piece argues for institutional safeguards and restrictions on AI use in foundational learning stages.

Key Points

  • AI improves efficiency in simple tasks, but shows no clear gains in complex, multi-step reasoning.
  • Learning as process is key: reading and writing actively shape thinking in ways AI can’t substitute for.
  • Overuse of AI could lead to deskilling: students risk losing internal cognitive capacity.
  • The hallucination risk (AI generating false content) undermines trust in AI outputs.
  • Institutions need to set boundaries on AI use, particularly where literacy and foundational skills are at stake.

Keywords

URL

https://www.dawn.com/news/1945405

Summary generated by ChatGPT 5


Generative AI might end up being worthless – and that could be a good thing


A large, glowing, glass orb of generative AI data is shattering and dissipating into a pile of worthless dust. The ground is dry and cracked, and behind the orb, a single, small, green sprout is beginning to grow, symbolizing a return to human creativity. The scene visually represents the idea that the potential 'worthlessness' of AI could be a good thing. Generated by Nano Banana.
While the value of generative AI is a subject of intense debate, some argue that its potential to become ‘worthless’ could be a positive outcome. This image captures the idea that if AI’s allure fades, it could clear the way for a resurgence of human-led creativity, critical thinking, and innovation, ultimately leading to a more meaningful and authentic creative landscape. Image (and typos) generated by Nano Banana.

Source

The Conversation

Summary

The article argues that the current hype around generative AI (GenAI) may oversell its value: it may eventually prove “worthless” in terms of sustainable returns, and that wouldn’t necessarily be bad. Because GenAI is costly to operate and its productivity gains have so far been modest, many companies could fail to monetise it. Such a collapse might temper the hype, reduce wasteful spending, and push society towards deeper uses of AI (ethics, reliability, human-centred value) rather than chasing illusions. The author sees a scenario in which AI becomes a modest tool rather than the transformative juggernaut many expect.

Key Points

  • GenAI’s operational costs are high and monetisation is uncertain, so many ventures may fail.
  • Overhyping AI risks creating bubble dynamics—lots of investment chasing little real value.
  • A “worthless” AI future may force more careful, grounded development rather than blind expansion.
  • It could shift attention to AI’s limits, ethics, robustness, and human oversight.
  • The collapse of unrealistic expectations might be healthier than unchecked hype.

Keywords

URL

https://www.theconversation.com/generative-ai-might-end-up-being-worthless-and-that-could-be-a-good-thing-266046

Summary generated by ChatGPT 5


Artificial intelligence may not be artificial


In a grand, domed library or scientific hall, a large group of scientists and academics in lab coats and formal attire are gathered around a circular table. Above them, a massive, glowing holographic human brain pulsates with light and intricate neural network connections, some extending into the starry dome. Books are seen floating around, symbolizing knowledge. Generated by Nano Banana.
The very nature of artificial intelligence is sparking profound philosophical and scientific debate, with some questioning whether it truly remains “artificial.” This image visually represents the deep contemplation surrounding AI’s origins and capabilities, suggesting that its complexity and emergent properties might hint at a form of intelligence that transcends purely synthetic creation. Image (and typos) generated by Nano Banana.

Source

Harvard Gazette

Summary

Blaise Agüera y Arcas challenges the framing of AI as “artificial” by showing how human brains and artificial systems share computational principles. He argues that brains evolved to compute, processing inputs into predictive models, and that evolution’s growth in complexity was powered not just by mutation and selection but by cooperation (symbiogenesis). According to Agüera y Arcas, when organisms merge or cooperate, their computational capacity can scale in parallel, a pattern mirrored in how AI systems evolve. He draws close parallels between biology and machine learning, situating life itself as computational from the start.

Key Points

  • Agüera y Arcas asserts that human brains are literally computational, not just metaphorically so.
  • Evolutionary complexity involved more than selection—cooperation and symbiosis (symbiogenesis) were crucial.
  • Brains and AI both operate through prediction: transforming inputs into outputs via internal models.
  • When systems cooperate (whether biological or synthetic), they achieve parallel computation and greater complexity.
  • The article bridges notions of life, computation, and intelligence—arguing the boundary between “natural” and “artificial” is less clear than often assumed.

Keywords

URL

https://news.harvard.edu/gazette/story/2025/09/artificial-intelligence-may-not-be-artificial/

Summary generated by ChatGPT 5


We must set the rules for AI use in scientific writing and peer review


A group of scientists and academics in lab coats are seated around a conference table in a modern meeting room with a city skyline visible through a large window. Above them, a glowing holographic screen displays "GOVERNING AI IN SCIENTIFIC PUBLICATION," with two main columns: "Scientific Writing" and "Peer Review," each listing specific regulations and ethical considerations for AI use, such as authorship, plagiarism checks, and bias detection. Image (and typos) generated by Nano Banana.
As AI’s role in academic research rapidly expands, establishing clear guidelines for its use in scientific writing and peer review has become an urgent imperative. This image depicts a panel of experts discussing these crucial regulations, emphasizing the need to set ethical frameworks to maintain integrity, transparency, and fairness in the scientific publication process. Image (and typos) generated by Nano Banana.

Source

Times Higher Education

Summary

George Chalhoub argues that as AI becomes more entrenched in research and publication, the academic community urgently needs clear, enforceable guidelines for its use in scientific writing and peer review. He cites evidence of undeclared AI involvement in manuscripts and reviews, hidden prompts designed to sway AI-assisted reviewers, and inflated submission volumes. To maintain credibility, journals must require authors and reviewers to disclose AI use, forbid listing AI as a co-author, and ensure human oversight. Chalhoub frames AI as a tool, not a decision-maker, and insists that accountability, transparency, and common standards must guard against erosion of trust in the scientific record.

Key Points

  • Significant prevalence of AI content: e.g. 13.5% of 2024 abstracts bore signs of LLM use, with some fields reaching 40%.
  • Up to ~17% of peer-review sentences may already be generated by AI, per studies of review corpora.
  • Some authors embed hidden prompts (e.g. white-text instructions) to influence AI-powered reviewing tools.
  • Core requirements: disclosure of AI use (tools, versions, roles), human responsibility for verification, no listing of AI as author.
  • Journals should adopt policies involving audits, sanctions for misuse, and shared frameworks via organisations like COPE and STM.

Keywords

URL

https://www.timeshighereducation.com/opinion/we-must-set-rules-ai-use-scientific-writing-and-peer-review

Summary generated by ChatGPT 5


Black Eyed Peas’ will.i.am to teach AI class at ASU


In a futuristic, dark room with glowing blue and red neon lights, a large holographic screen displays an online AI class titled "THE AGENTIC SELF." The main panel shows a charismatic male professor speaking, surrounded by various AI-related data, neural networks, and a stylized human head representing an AI. Below, a grid of diverse student participants is visible in a virtual meeting. The Arizona State University (ASU) logo is also displayed. Image (and typos) generated by Nano Banana.
This image envisions an engaging online AI class at Arizona State University, titled “The Agentic Self,” exploring the intricacies of autonomous AI. It showcases a dynamic virtual classroom where students connect from various locations, delving into cutting-edge concepts of AI’s self-governing capabilities and its implications for the future. Image (and typos) generated by Nano Banana.

Source

Phoenix Business Journal

Summary

Arizona State University announced that Black Eyed Peas performer and entrepreneur will.i.am will join the faculty as a professor of practice to teach a course on artificial intelligence. Starting in spring 2026, he will lead “The Agentic Self”, a 15-week class exploring how AI can serve as a creative and educational partner. The class will run through ASU’s GAME School and connect to will.i.am’s FYI.AI platform. University officials frame the collaboration as part of ASU’s mission to innovate in teaching and to help students gain fluency in emerging technologies.

Key Points

  • will.i.am joins ASU as professor of practice to teach AI.
  • Course title: “The Agentic Self”, scheduled for spring 2026.
  • Students will explore AI as tool, collaborator, and creative partner.
  • Class is hosted by ASU’s GAME School and linked to the FYI.AI platform.
  • Move underscores ASU’s strategy of blending tech, industry expertise, and higher education innovation.

Keywords

URL

https://www.bizjournals.com/phoenix/news/2025/09/29/black-eyed-peas-performer-to-teach-asu-class-on-ai.html

Summary generated by ChatGPT 5