The CEO of Google DeepMind has predicted that AGI (Artificial General Intelligence) could arrive by 2030. Reaching that milestone, however, will likely require one or two more groundbreaking innovations on par with the Transformer or AlphaGo.
At the NeurIPS 2025 conference, Google introduced its most promising successor to the Transformer: the Titans architecture. The new design combines the linear-time efficiency of RNNs with the recall accuracy of Transformers, retrieving information reliably even across a 2-million-token context. The announcement sent shockwaves through the tech community.
**DeepMind CEO: AGI by 2030** Earlier this year, DeepMind's CEO, Demis Hassabis, predicted that AGI, capable of matching or surpassing human intelligence, could emerge before 2030. In a recent public discussion, he reiterated this timeline, calling the arrival of AGI one of the most disruptive moments in human history and one that is fast approaching.
Hassabis envisions a future in which AGI helps solve humanity's greatest challenges, from clean energy and disease eradication to breakthroughs in materials science. Yet he also acknowledges existential risks, including catastrophic misuse of AI and potential extinction-level threats.
**Gemini 3: Untapped Potential** Hassabis highlighted Gemini's underrated ability to analyze video and answer abstract questions, which he described as early "meta-cognitive" capabilities. Even so, he admitted that the development team itself has explored less than 10% of Gemini's potential, with users often uncovering unexpected applications.
**The Path to AGI** According to Hassabis, true AGI requires balanced cognitive abilities—creativity, invention, continual learning, and multi-step reasoning—areas where current LLMs still fall short. While scaling existing systems may contribute to AGI, he believes one or two major breakthroughs are still needed.
**Titans: A Transformer-Level Leap** At NeurIPS 2025, Google unveiled Titans alongside the MIRAS framework, together enabling dynamic memory updates and ultra-long-context processing. Titans introduces a neural long-term memory module that keeps learning at inference time: information that surprises the model is written into memory, while stale or irrelevant content is allowed to decay. A rough sketch of this update rule appears below.
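To make the idea concrete, here is a minimal PyTorch sketch of a Titans-style memory step under simplifying assumptions: the memory is a small MLP trained online, "surprise" is the gradient of a key-to-value reconstruction loss, and all names and hyperparameters are illustrative, not taken from Google's implementation.

```python
import torch

# Illustrative sketch of a Titans-style neural long-term memory, not
# Google's code. The memory is a small MLP trained *at test time*:
# tokens that surprise it (large gradient of a key->value reconstruction
# loss) get written in, while a forget gate decays old state.

class LongTermMemory(torch.nn.Module):
    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden),
            torch.nn.SiLU(),
            torch.nn.Linear(hidden, dim),
        )

    def forward(self, k: torch.Tensor) -> torch.Tensor:
        # Retrieval: map a key to the value the memory associates with it.
        return self.net(k)

def memory_step(mem, k, v, momentum, lr=1e-2, eta=0.9, alpha=0.01):
    """One online write. The constants lr, eta, alpha are assumptions;
    in the paper the corresponding gates are input-dependent."""
    loss = torch.nn.functional.mse_loss(mem(k), v)   # "surprise" signal
    grads = torch.autograd.grad(loss, list(mem.parameters()))
    with torch.no_grad():
        for p, g, s in zip(mem.parameters(), grads, momentum):
            s.mul_(eta).sub_(g, alpha=lr)   # past surprise + new surprise
            p.mul_(1.0 - alpha).add_(s)     # forget a little, then write
    return loss.item()

# Usage: stream key/value pairs through the memory, one chunk at a time.
dim = 64
mem = LongTermMemory(dim)
momentum = [torch.zeros_like(p) for p in mem.parameters()]
keys, values = torch.randn(8, dim), torch.randn(8, dim)
print(memory_step(mem, keys, values, momentum))
```

The design choice worth noticing is that memory updates are gradient steps rather than fixed recurrence rules, which is what lets the module decide at inference time what is worth keeping.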
MIRAS, a companion theoretical framework, unifies sequence-modeling approaches along four design axes: the memory architecture, the attentional bias (the objective the memory optimizes), the retention gate (how strongly old memories are preserved), and the memory-update algorithm. Together, the resulting models outperform baselines such as Mamba-2 and Transformer++ on language-modeling and reasoning tasks, particularly at contexts beyond 2 million tokens.
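One way to read that unification, paraphrasing the framework's framing in notation of our own choosing: at each step, the memory write approximately solves a small optimization problem,

$$\mathcal{M}_t \;=\; \arg\min_{\mathcal{M}} \;\; \ell\big(\mathcal{M}(k_t),\, v_t\big) \;+\; \mathcal{R}\big(\mathcal{M},\, \mathcal{M}_{t-1}\big),$$

where $\ell$ is the attentional bias (how the memory should bind the key $k_t$ to the value $v_t$) and $\mathcal{R}$ is the retention term (how far the memory may drift from its previous state). Under this reading, a squared-error $\ell$ with a simple proximity penalty $\mathcal{R}$ recovers linear-attention-style updates, while richer choices yield Titans-like variants.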
**Looking Ahead** With Titans addressing long-term memory and continual learning, and Gemini showcasing early "meta-cognitive" abilities, AGI may be closer than ever. As speculation grows, some predict Titans could power Gemini 4, marking Google's first major architectural breakthrough since the Transformer.