A leading expert has issued a stark warning that artificial intelligence (AI) could become the "last technology humanity ever builds", as a major research project revises its so-called "doom timeline" for when machines might surpass human control.
The Revised 'Doom Timeline'
The AI 2027 research project, originally published in April 2025 by the group AI Futures, painted a concerning future scenario. It predicted that by 2027, AI could achieve "fully autonomous coding", allowing it to improve itself recursively and potentially give rise to a "superintelligence" capable of outperforming humans in most cognitive tasks. In the scenario's most extreme branch, this led to human obsolescence or even extinction early in the next decade.
However, in a significant update published at the end of December, AI Futures has pushed this timeline back. Project leader Daniel Kokotajlo stated on the social media platform X that "things seem to be going somewhat slower". The group's revised model now predicts that AI will achieve fully autonomous coding in the 2030s rather than by 2027, with artificial superintelligence potentially emerging around 2034.
Expert Reactions and Ongoing Risks
The initial 2027 prediction sparked intense debate. Some, like Emeritus Professor Gary Marcus of New York University, dismissed its narrative as "pure science fiction mumbo jumbo". Others, however, credited it with igniting a critical conversation about long-term AI safety.
Dr Fazl Barez, a senior research fellow at the University of Oxford specialising in AI safety, told The Independent that while he disagrees with the specific timeline, the core concern is valid. "Among experts, nobody really disagrees that if we don't figure out alignment... it could potentially be the last technology humanity ever builds," he stated.
The Race Between Capability and Safety
Dr Barez, who leads research within the AI Governance Initiative, highlighted a dangerous imbalance. He described AI capability development as moving at the "speed of light", far outpacing progress on safety measures and mitigation of societal risks. "We haven't really figured out how to prevent either the bad consequences that come with it or the consequences that perpetuate and increase existing issues in society," he warned.
The greater risk, from his perspective, may be a "gradual disempowerment of humanity" as reliance on AI deepens. The fundamental challenge is ensuring this powerful technology "is always there to serve our purposes and goals... and not one that replaces us". The revised timeline offers a crucial, if uncertain, window in which to address these existential questions.