
Source
The Irish Times
Summary
John Thornhill examines the paradox at the heart of the artificial intelligence boom: even the developers of generative AI systems cannot fully explain how their models work. Despite the hundreds of billions of dollars being invested in the race toward artificial general intelligence (AGI), experts remain divided on what AGI means or whether it is achievable at all. While industry leaders such as OpenAI and Google DeepMind pursue it with near-religious zeal, critics warn of existential risks and call for restraint. At a Royal Society conference, scholars argued for redirecting research toward tangible, transparent goals and for prioritising safety over hype as AI continues its relentless expansion.
Key Points
- Massive investment continues despite no shared understanding of AGI’s meaning or feasibility.
- Industry figures frame AGI as imminent, while most academics consider it unlikely.
- Experts highlight safety, transparency, and regulation as neglected priorities.
- Alan Kay and Shannon Vallor urge shifting focus from “intelligence” to demonstrable utility.
- Thornhill concludes that humanity’s true “superhuman intelligence” remains science itself.
Keywords
URL
Summary generated by ChatGPT 5