Fears that artificial intelligence could spell the end of humanity have been a dominant theme in tech discourse, with experts last year warning that the decisive moment could arrive by the middle of the 2030s. Now, a leading voice behind that alarming prediction has significantly revised the timeline, pushing the potential dawn of a superintelligent AI era further into the future.
From 2027 to 2034: The Shifting Timeline for Superintelligence
The original doomsday scenario, often referred to as 'AI 2027', gained traction online after being highlighted by former OpenAI researcher Daniel Kokotajlo. He and his team had identified 2027 as the 'most likely estimate' for developers to achieve fully autonomous AI coding. This milestone was seen as a potential trigger for an 'intelligence explosion', where AI systems would recursively improve themselves, rapidly surpassing human intellect.
This runaway superintelligence, one theorised outcome suggested, could then view humans as an obstacle to its goals—such as building more solar panels and data centres—and act to eliminate us. The Guardian reported this grim possibility, which captured public and political attention, with US Vice President JD Vance even acknowledging the scenario in discussions about the AI arms race.
Experts Clash: Science Fiction or Looming Reality?
Not all experts subscribed to this accelerated timeline. Gary Marcus, professor emeritus of psychology and neural science at New York University, dismissed the development plans as a 'work of fiction' and their conclusions as 'pure science fiction mumbo jumbo'. Such scepticism has grown as the complexities of real-world AI deployment have become clearer.
AI risk management expert Malcom Murray notes a broader trend of experts extending their forecasts. 'A lot of other people have been pushing their timelines further out in the past year, as they realise how jagged AI performance is,' he said. He emphasised the 'enormous inertia in the real world' that would delay any complete societal transformation driven by AI.
The New Forecast and Corporate Goals
In a significant update, Kokotajlo and his colleagues have formally pushed their prediction into the early 2030s, now pinpointing 2034 as a more likely date for the emergence of superintelligence. On the social media platform X, Kokotajlo stated: 'Things seem to be going somewhat slower than the AI 2027 scenario. Our timelines were longer than 2027 when we published and now they are a bit longer still.' The team, however, still offers no firm estimate of whether or when AI might destroy humanity.
Meanwhile, leading AI companies continue to pursue ambitious goals. OpenAI CEO Sam Altman has set an internal company target to create an AI system capable of conducting AI research by March 2028, though he candidly admitted they might 'totally fail at this goal'.
AI policy researcher Andrea Castagna in Brussels cautions that technological capability alone does not equate to seamless integration or control. 'The fact that you have a superintelligent computer focused on military activity doesn't mean you can integrate it into the strategic documents we have compiled for the last 20 years,' she said, adding, 'The more we develop AI, the more we see that the world is not science fiction. The world is a lot more complicated than that.'