Not Even Generative AI’s Developers Fully Understand How Their Models Work


Groundbreaking research has unveiled a startling truth: even the developers of generative AI models do not fully comprehend the intricate inner workings of their own creations. This image vividly portrays a team of scientists grappling with the “black box” phenomenon of advanced AI, highlighting the profound challenge of understanding systems whose complexity surpasses human intuition and complete analysis. Image (and typos) generated by Nano Banana.

Source

The Irish Times

Summary

John Thornhill examines the paradox at the heart of the artificial intelligence boom: even the developers of generative AI systems cannot fully explain how their models function. Despite hundreds of billions of dollars being invested in the race toward artificial general intelligence (AGI), experts remain divided on what AGI means or whether it is achievable at all. While industry leaders such as OpenAI and Google DeepMind pursue it with near-religious zeal, critics warn of existential risks and call for restraint. At a Royal Society conference, scholars argued for redirecting research toward tangible, transparent goals and for prioritising safety over hype in AI's relentless expansion.

Key Points

  • Massive investment continues despite no shared understanding of AGI’s meaning or feasibility.
  • Industry figures frame AGI as imminent, while most academics consider it unlikely.
  • Experts highlight safety, transparency, and regulation as neglected priorities.
  • Alan Kay and Shannon Vallor urge shifting focus from “intelligence” to demonstrable utility.
  • Thornhill concludes that humanity’s true “superhuman intelligence” remains science itself.

Keywords

URL

https://www.irishtimes.com/business/2025/10/10/not-even-generative-ais-developers-fully-understand-how-their-models-work/

Summary generated by ChatGPT 5


We must set the rules for AI use in scientific writing and peer review


As AI’s role in academic research rapidly expands, establishing clear guidelines for its use in scientific writing and peer review has become an urgent imperative. This image depicts a panel of experts discussing these crucial regulations, emphasizing the need to set ethical frameworks to maintain integrity, transparency, and fairness in the scientific publication process. Image (and typos) generated by Nano Banana.

Source

Times Higher Education

Summary

George Chalhoub argues that as AI becomes more entrenched in research and publication, the academic community urgently needs clear, enforceable guidelines for its use in scientific writing and peer review. He cites evidence of undeclared AI involvement in manuscripts and reviews, hidden prompts planted to sway AI-assisted reviewers, and inflated submission volumes. To maintain credibility, journals must require authors and reviewers to disclose AI use, forbid listing AI as a co-author, and ensure human oversight. Chalhoub frames AI as a tool rather than a decision-maker, insisting that accountability, transparency, and common standards are needed to guard against erosion of trust in the scientific record.

Key Points

  • Significant prevalence of AI content: e.g. 13.5% of 2024 abstracts bore signs of LLM use, with some fields reaching 40%.
  • Up to 17% of peer-review sentences may already be generated by AI, according to studies of review corpora.
  • Some authors embed hidden prompts (e.g. white-text instructions) to influence AI-powered reviewing tools.
  • Core requirements: disclosure of AI use (tools, versions, roles), human responsibility for verification, no listing of AI as author.
  • Journals should adopt policies involving audits, sanctions for misuse, and shared frameworks via organisations like COPE and STM.

Keywords

URL

https://www.timeshighereducation.com/opinion/we-must-set-rules-ai-use-scientific-writing-and-peer-review

Summary generated by ChatGPT 5