Demis Hassabis Warns AGI Is Years Away Despite AI Advances

An AI model from Google DeepMind stunned competitors from around the world last year when it won a gold medal at the prestigious International Mathematical Olympiad. Why, then, do the same models make mistakes on basic math questions? Such inconsistencies are the hallmark of AI's "jagged intelligence," according to DeepMind CEO Demis Hassabis.
Today's AI systems are "very good at some things, but very poor at others," Hassabis said while speaking at the India AI Impact Summit 2026 in New Delhi, India, today (Feb. 18). This unevenness must be resolved before artificial general intelligence (AGI), a type of AI that rivals human intelligence, can be reached, he added, predicting that the milestone is five to eight years away.
Like the rest of Silicon Valley, DeepMind is racing to be the first developer to unlock the power of advanced AI. Acquired by Google more than a decade ago, the lab was founded in 2010 by Hassabis and a small group of researchers with the aim of solving "intelligence" and, in the process, addressing some of the world's biggest questions. "I don't think we're there yet," said Hassabis.
Besides smoothing over AI's rough edges, other obstacles on the road to AGI include improving planning capabilities so that systems can handle long-horizon tasks rather than only short-term goals. Hassabis is also focused on continual learning: ensuring that systems can adapt and personalize through experience, instead of only absorbing new information before release. For now, he said, the models are "frozen and distributed around the world."
How the arrival of AGI will one day be recognized remains an open question across the tech industry. For Hassabis, success will come with the emergence of true creativity in AI. This is not limited to art but extends to science: whether programs can not only solve a given problem, but also pose the right questions and conjectures, an ability that Hassabis said separates "great scientists from good scientists."
The AI chief is excited about the prospect of the models eventually acting as "collaborative scientists" as they grow more autonomous. His focus on scientific research is not surprising given his achievements, which include receiving the Nobel Prize in Chemistry for his work on AlphaFold, an AI system that predicts protein structures, and founding Isomorphic Labs, an Alphabet subsidiary that uses AI in drug discovery.
While Hassabis has long touted the scientific promise of AI, his rivals, OpenAI CEO Sam Altman and Anthropic's Dario Amodei, have placed more weight on the technology's commercial and labor implications. Their differing priorities are reflected in their AGI timelines: Altman has suggested such systems may appear by the end of the decade, while Amodei believes they may arrive sooner.
One point of consensus among leading AI developers, however, is that AGI will bring new risks. Hassabis divides them into two categories: social risks, in which bad actors misuse AI, and technical risks, in which systems behave in unpredictable and potentially dangerous ways. Preparing for the former requires global dialogue and shared standards. "To reduce some of the risks, we will need international cooperation," he said.