The AI industry is trapped in a speculative bubble fuelled by an unsustainable “brute force” approach to development, warned Prof. Stuart Russell, a leading AI researcher and co-author of the standard university textbook Artificial Intelligence: A Modern Approach.
Speaking ahead of the India AI Impact Summit 2026, Russell cautioned that the current trajectory of investment—which he estimates at 50 times the cost of the Manhattan Project—is outpacing the technology’s actual capabilities. While the field has seen a “thousandfold increase” in size over the last decade, growing from billions to trillions in investment, Russell argues that the industry has hit a wall of diminishing returns.
“I’ve never really bought into this scaling argument,” Russell said, referring to the prevailing belief that simply adding more data and computing power to LLMs will yield human-level intelligence. He contends that the AI sector is “stuck in a paradigm” of training circuits, rather than developing more expressive, program-based approaches that would be far more energy- and data-efficient.
Russell predicts that without major, unpredictable technical breakthroughs, the AI bubble will burst. “I don’t think the technology we have now can produce the return that these investments are demanding,” he said. “If you’re investing $3 trillion… you have to get some substantial return and we’re nowhere close to generating that”.
Artificial General Intelligence
Russell’s scepticism extends to the geopolitical race to achieve Artificial General Intelligence (AGI)—systems more capable and powerful than human beings. He argues that the current “arms race” mentality between the US and China is fundamentally flawed because neither nation currently possesses the safety architecture required to manage such systems.
“Whoever gets AGI first, everyone loses, because we don’t know how to control systems that are more intelligent than human beings,” Russell said. He described this as the “control problem” or “alignment problem”: the challenge of ensuring super-intelligent entities remain aligned with human interests.
Russell noted that China appears to be shifting its strategy away from a direct AGI race toward practical applications in the public and private sectors—a move he contrasted with the US administration and tech sector, which view AGI as a “race to the moon”.
The human impact of AI policy
Addressing the regulatory landscape, Russell criticised the “facetious dichotomy” often presented by the tech industry between safety and innovation. He drew parallels to the nuclear industry, noting that it was safety failures—specifically Chernobyl—that destroyed the industry’s growth, not regulation. “It is not that there’s a trade-off, it’s that without safety you don’t get the benefits,” he said.
Russell pointed out the hypocrisy of technology executives who lobby against AI regulation while relying on regulated infrastructure for their own safety. “They flew to that meeting on regulated airplanes…and then they complain about regulation,” Russell said. “They enjoy the protection of regulation in everything that they do and yet they do not want to allow anyone to be protected from their technology.”
Beyond physical safety, Russell highlighted mental health risks, specifically “delusion and psychosis” caused by AI systems that exhibit sycophancy. He also expressed concern about the “atrophy of mental capabilities”, fearing that too much reliance on AI for writing and reasoning will degrade human cognitive muscles just as the industrial revolution degraded physical ones.
India AI Impact Summit: Education and Healthcare
As India prepares to host the AI Impact Summit, Russell endorsed the country’s strategy of focusing on “adoption and diffusion” rather than solely on innovation. He suggested the summit should focus on how technology can deliver tangible value in sectors like healthcare and education to jumpstart local economies.
Russell cited AlphaFold, which predicts protein structures, as a prime example of AI delivering real scientific breakthroughs by incorporating physics and chemistry into the learning process, rather than relying solely on language models.
However, he warned that applying AI to education faces a business model hurdle. While AI tutors could revolutionise learning for the global population lacking access to schools, Silicon Valley venture capital models—which demand returns in 12 to 18 months—are ill-suited for the education sector. Russell argued progress in this area will likely require government and philanthropic investment rather than private capital alone.
