DeepMind Co-founder Concerned about AI Hype and Investment Surge
30 June 2024
Amid the surge of excitement around artificial intelligence, industry leaders caution that the fervor risks obscuring AI’s genuine progress. The massive influx of funds into AI startups draws parallels with speculative bubbles such as cryptocurrency, raising questions about how to distinguish real potential from exaggerated claims.
The Financial Times recently spoke with DeepMind co-founder Demis Hassabis, who reflected on the billions allocated to generative AI companies. While he acknowledges that these investments have brought with them hyperbole akin to that seen in sectors such as cryptocurrency, Hassabis believes the underlying science is extraordinary. “In one respect, AI is not hyped enough; yet, in other aspects, it risks being overhyped, leading to a focus on things that are not yet reality,” he says.
The AI investment landscape witnessed a surge, especially after the November 2022 unveiling of OpenAI’s conversational AI, ChatGPT. Investors rushed to secure a stake in generative AI, with venture capital groups pouring a staggering $42.5 billion into roughly 2,500 AI startup equity rounds, according to CB Insights. The eagerness is not confined to the private sector: public-market investors have rallied around leading tech companies, propelling the likes of Microsoft, Alphabet, and Nvidia and contributing to global stock markets’ strongest first-quarter performance in years.
Despite the influx of capital and interest, there are rumblings of regulatory skepticism. Gary Gensler, chair of the US Securities and Exchange Commission, has warned against ‘AI washing’, a play on the term ‘greenwashing’, which refers to misleading claims companies make about their environmental initiatives. With AI, the warning targets unfounded claims about a company’s AI capabilities.
In conversation with the Financial Times, Hassabis—newly knighted for his scientific endeavors—reiterated his belief in AI as one of humanity’s most transformative inventions. He said, “We are merely at the initial stage of unlocking AI’s potential. The coming decade might witness a bloom in scientific discovery akin to a new Renaissance.”
DeepMind’s 2021 release of the AlphaFold model stands as a testament to AI’s formidable potential, helping predict structures of over 200 million proteins and becoming an invaluable tool for more than a million biologists globally. Further expanding AI’s horizon, DeepMind is dedicated to pioneering AI tools in multiple domains, including drug discovery, material science, and energy—efforts Hassabis sees as using AI as the ultimate scientific instrument.
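Those predictions reach biologists through the public AlphaFold Protein Structure Database, which exposes a simple REST API. As an illustrative sketch (not an official client), the snippet below fetches prediction metadata for a single UniProt accession; the endpoint path is documented at alphafold.ebi.ac.uk, but the exact response field names used here, such as pdbUrl, are assumptions worth checking against the current API docs.

```python
# Sketch: look up an AlphaFold-predicted structure via the public
# AlphaFold Protein Structure Database REST API. Field names such as
# "entryId" and "pdbUrl" are assumptions to verify against the API docs.
import requests

ACCESSION = "P69905"  # UniProt accession for human hemoglobin subunit alpha

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}",
    timeout=30,
)
resp.raise_for_status()

# The endpoint returns a JSON list of prediction entries for the accession.
for entry in resp.json():
    print(entry.get("entryId"), "->", entry.get("pdbUrl"))
```

The returned URLs should point at downloadable structure files, which standard molecular-visualization tools can then render.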
However, the path to the creation of artificial general intelligence (AGI), a system with comprehensive cognitive abilities on par with humans, remains a topic of intense debate. Hassabis acknowledges that a few key breakthroughs are still required for AGI, but he puts a 50 percent chance on its realization within the next decade, a timeline he has maintained since DeepMind’s inception.
DeepMind’s approach to AGI is grounded in the scientific method, in contrast to the ‘hacker approach’ popularized by Silicon Valley. The gravity of AGI’s potential impact, Hassabis argues, demands methodical, research-based development.
Importantly, DeepMind has actively contributed to the international conversation on AI safety, highlighted by the AI Safety Summit held at Bletchley Park. Hassabis sees these platforms, along with the establishment of AI safety institutes in the UK and US, as significant strides towards responsible AI development. Yet, he urges quicker progress due to the exponential advancement of AI technologies.
Recently, DeepMind tackled AI reliability in a paper introducing a methodology named SAFE, which targets factual inaccuracies produced by powerful AI models such as OpenAI’s GPT text generators and Google’s Gemini. These models can sometimes generate ‘hallucinations’, or false information, an issue SAFE addresses by cross-referencing a model’s responses against sources such as Google Search or Google Scholar. This parallels the meticulous approach taken by AlphaGo, which reviews potential moves before settling on the most strategic option. SAFE’s ratings agreed with those of crowdsourced human annotators at a substantial reduction in cost, marking a promising advance for the practical application of large language models.
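The article does not detail SAFE’s mechanics, but the published idea is, in outline: decompose a long-form answer into individual factual claims, gather search evidence for each claim, and judge whether the evidence supports it. The sketch below illustrates that loop; the helpers (the naive sentence splitter, the empty retrieval stub, and the token-overlap judge) are hypothetical stand-ins for the LLM prompts and Google Search calls the actual method uses.

```python
# Illustrative SAFE-style fact-checking loop: split an answer into atomic
# claims, retrieve evidence for each, and rate each claim as supported or
# not. The helpers are hypothetical stand-ins, not DeepMind's implementation.
from dataclasses import dataclass


@dataclass
class ClaimVerdict:
    claim: str
    supported: bool


def split_into_claims(answer: str) -> list[str]:
    # Stand-in: the real method prompts an LLM to produce self-contained
    # claims; a naive sentence split keeps this sketch dependency-free.
    return [s.strip() for s in answer.split(".") if s.strip()]


def search_evidence(claim: str) -> list[str]:
    # Stand-in: the real method issues search queries per claim and
    # collects result snippets. This sketch performs no retrieval.
    return []


def is_supported(claim: str, evidence: list[str]) -> bool:
    # Stand-in: the real method asks a judge LLM to weigh the evidence.
    # Here, a crude token-overlap heuristic so the loop runs end to end.
    claim_tokens = set(claim.lower().split())
    return any(claim_tokens & set(snippet.lower().split()) for snippet in evidence)


def check_factuality(answer: str) -> list[ClaimVerdict]:
    return [
        ClaimVerdict(claim, is_supported(claim, search_evidence(claim)))
        for claim in split_into_claims(answer)
    ]


if __name__ == "__main__":
    answer = "AlphaFold was released by DeepMind. It predicts protein structures."
    for verdict in check_factuality(answer):
        print(f"supported={verdict.supported}: {verdict.claim}")
```

Replacing the stubs with real search and judge calls is where the reported cost advantage comes from: automated queries and model judgments are far cheaper than crowdsourced annotation.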
For enthusiasts and industry participants alike, keeping a close watch on such developments is essential to maintaining a grounded perspective in an environment prone to hyperbole. The road ahead for AI promises to blend scientific achievement with enterprise, but vigilance remains key to distinguishing true innovation from the din of hype.