
Mallaby on Hassabis: How He Brought Google Back to the AI Table

Deep News

In 2022, in London, the technology and financial historian Sebastian Mallaby sought out Demis Hassabis. At that time, ChatGPT had not yet ignited global conversation, and AI was far from a common topic of discussion. Yet Mallaby sensed an approaching storm, and he knocked on the door of the DeepMind founder, wanting to write a book about him. "If artificial intelligence is the most important thing in history," Mallaby told him, "then you, as its creator, must also be one of the most important people in history. There will inevitably be a book about you; that is not something you can choose. What you can choose is who writes it." Months later, Hassabis agreed. The dialogue that followed, spanning dozens of hours, left a record of DeepMind, Hassabis, and the ongoing AI race.

In Mallaby's description, Hassabis is a complex figure. He is a Nobel Prize-winning scientist and a former entrepreneur who developed video games; he is fascinated by the fundamental mechanisms of theoretical physics and biology, yet he was willing to spend two hours in a small restaurant explaining to a writer why he "missed" the language model trend; he is competitive by nature, yet at heart he would rather keep AI in the lab longer to ensure it is safe. Given his complex background and his obsession with intelligence, it becomes easier to understand why Alphabet initially fell behind OpenAI in this AI competition.

The miss was followed by a protracted counterattack. Alphabet reorganized its AI business, with Hassabis working 100 hours a week, 50 weeks a year. When Gemini 3 was released at the end of 2025, the media dubbed it "Alphabet's revenge." But the AI competition continues: the path to true intelligence remains undefined, competition between major companies is intensifying, and across the ocean, Chinese AI companies have entered the arena. In this rapidly changing era, where will the wave of technology lead us?

The Nobel Prize and the Missed Opportunity

Hassabis is not a typical Silicon Valley CEO. "He is a mix of two temperaments: on one hand, extremely intelligent, clearly a genius; on the other, very approachable and down-to-earth. He speaks in an accessible way, always smiling, always open with people," Mallaby said of his first impression of Hassabis, an impression that remained unchanged after more than 30 hours of conversation. Most importantly, he is a scientist driven by the pursuit of intelligence.

Born into an immigrant family, Hassabis was inspired to explore AI at age 11 after losing a 10-hour chess match. At 16, he joined the top game studio Bullfrog Productions, contributing to the hit game "Theme Park." After studying computer science at Cambridge, he returned to academia and earned a PhD in cognitive neuroscience. In 2010, he co-founded DeepMind in London with Shane Legg and Mustafa Suleyman; in 2014, DeepMind was acquired by Google. During fundraising, investors often asked, "What is your product?" Hassabis found such questions unimaginative. His constant reply was: "The most important thing ever."

Commercial value was not Hassabis's primary pursuit. In March 2016, AlphaGo's victory over the South Korean Go master Lee Sedol in Seoul was seen as a milestone in AI development. Mallaby's book reveals that on the day of the victory, Hassabis left the venue and said to David Silver, the lead scientist on AlphaGo: "Next we can start working on protein structure prediction." In Hassabis's view, AlphaFold was a "Nobel Prize-level project" from the start, not a product line expected to show commercial returns within a year or two. In 2024, Hassabis and his colleague John Jumper, alongside the American scientist David Baker, won the Nobel Prize in Chemistry for leading the development of AlphaFold2.

However, while winning Alphabet a Nobel Prize, Hassabis also caused the company to miss another historic opportunity. In 2017, the Google Brain team published the Transformer paper.
This core technology, which later underpinned the entire large-model wave, was open-sourced by Google and not commercialized. At the time, Hassabis was leading the DeepMind team in a final push on AlphaFold. Mallaby's book details this strategic miscalculation. He told reporters that, in a London cafe, he repeatedly asked Hassabis: "What exactly went wrong? How did you let OpenAI take the lead in language models?"

Hassabis's answer stemmed from his neuroscience background. He believed that true intelligence must be rooted in interaction with and feedback from the physical world, just as humans come to understand gravity, distance, and causality through action, trial and error, and perception. He viewed language as merely a "symbolic system," insufficient for building deep understanding. This belief led DeepMind to bet for years on reinforcement learning and to react slowly when the Transformer architecture emerged.

The idea proved wrong. "It turns out that the information contained in human language on the internet is incredibly rich. If you download all of it and train on it, models can indeed understand the physical environment, because internet language contains vast descriptions: what it feels like to pick up a glass of water, that if you drop it, it will shatter, and so on," Mallaby explained. "So Demis underestimated the importance of language in building artificial general intelligence."

Alphabet's Revenge

Because of this mistaken strategic choice, Alphabet found itself lagging behind OpenAI after the release of ChatGPT. "When ChatGPT came out, Google was indeed in a big predicament," Mallaby said. While researching his book, he visited Silicon Valley and spoke with AI leaders at Anthropic and OpenAI, and with venture capitalists investing in the field. He met with people from Sequoia Capital, and someone told him, "Obviously OpenAI will win." When he asked why, the response was, "Because of mindshare. Everyone is talking about OpenAI." At that time, the focus was on OpenAI, not Gemini.

For his part, Hassabis did not fully respect OpenAI. As noted above, Hassabis is a scientist in pursuit of intelligence; his team's culture leaned toward relaxed, long-term research rather than a fierce, product-driven competitive pace. That also meant he fundamentally disagreed with OpenAI's Sam Altman. "He doesn't like Sam Altman," Mallaby said. "Sam Altman represents a whole bunch of things he dislikes." He gave examples. First, Altman copied Hassabis: DeepMind was founded in 2010; OpenAI was founded five years later, proposing almost the same idea: "We are going to build AGI, and we are going to ensure its safety." For a time, Altman even recommended Hassabis's favorite book in interviews. Second, Altman embodied the Silicon Valley modus operandi: heavy on hype, talking about future plans as if they were already accomplished. Hassabis believed that an untruthful representation of reality was, in essence, a lie.

Yet it is undeniable that Hassabis is also highly competitive. "After ChatGPT appeared, I went to see him, and he told me, 'The OpenAI guys, it's like they drove military equipment onto my front lawn and set up their guns there. You could say this is war,'" Mallaby recounted. These are the words of a true competitor.
Mallaby recalled that when he signed the book contract, he wrote to the publisher: "I'm not sure who will win, but I've never met anyone more competitive than Hassabis. If he loses, I'll be surprised." Alphabet also made a huge gamble at the time: they merged the London-based DeepMind with the California-based Google Brain, placing them under the unified leadership of Hassabis. The two labs had very different cultures, had previously competed, and were separated by an 8-hour time difference. Nearly every business school textbook would deem such a merger unlikely to yield quick results. But Alphabet accomplished this "business restructuring" at digital-age speed. Hassabis himself entered a state of intense focus, later telling media that during that period he worked 100 hours a week for 50 weeks a year. When Gemini 3 was released at the end of 2025, demonstrating a "qualitative leap" in logical reasoning and multimodal interaction, the media began describing the comeback as "Alphabet's revenge."

The Second Half of the AI Race

Alphabet's counterattack succeeded, but the competition of the AI era continues, and Chinese AI companies have fully entered the scene. After a week in China visiting Huawei, Hikvision, and academic institutions in Beijing, Mallaby admitted frankly to reporters, "The West has underestimated the speed of China's AI acceleration." He believes that at the frontier-model level the US may still lead by "two to three months," but the real competition has shifted to the application layer, and there China may be taking the lead. He noted that Huawei's cloud services embed AI in industrial scenarios such as high-speed rail maintenance, mining, and logistics, while Hikvision pushes AI to the edge for water quality monitoring and urban governance. These are not lab-based AGI fantasies but tangible applications changing the economy. "Thirty years ago, making industry smarter relied chiefly on management consulting; institutions like IBM offered advice during Huawei's early development. Today, the core driver of economic intelligence has shifted to cloud-based AI," Mallaby said.

Yet beyond these rivalries, what truly makes Mallaby uneasy is not who leads or lags, but the fact that everyone is accelerating and almost no one is hitting the brakes. "Competition is good for pushing people to move fast, develop quickly, and make progress. But it's not so good for safety. I'm very worried about safety," Mallaby said.

Hassabis has always been a staunch advocate of safety. After Google's acquisition of DeepMind in 2014, he long sought independent operational control for DeepMind, believing that only by being insulated from the parent company's profit motives could AI be developed responsibly. "Their product launches were relatively late. He prefers to keep AI in the lab longer because he wants to ensure it's safe," Mallaby explained. Currently, controversies over AI safety are escalating.
In early 2026, OpenClaw went viral globally. When an AI is allowed to install itself on private computers and operate systems autonomously, while users don't know its next move, the power of the technology and its risks are two sides of the same coin. Mallaby noted that some industry leaders privately admit "this isn't safe," but competitive pressure prevents them from slowing down. "If some labs care about safety and some don't, then we need everyone to adhere to safety standards," he said. The only way to achieve this is for "governments to set rules."

"We need China and the US to do this simultaneously," Mallaby emphasized. If China strengthens safety regulation while the US accelerates at full speed, that is unfair to China, and vice versa. The future he envisions is an AI safety framework in which the superpowers jointly set standards, while other countries can develop "sovereign AI" subject to controls. Whether this vision can be realized will depend on the maneuvering of the coming years.

Hassabis agreed to Mallaby's interview request in 2022 because he believed what Mallaby told him: "If AI is the most important thing in history, then as its creator, you must explain to the world who you are, why you are doing this, and what your values are." Four years later, the question has transcended the individual. It has become a question about nations, about global governance, and about whether humanity can keep a clear head amid a headlong technological sprint. Mallaby hopes that by then, China will have prepared its answer.

