In the technology industry, Moore's Law once served as a guiding beacon, propelling the semiconductor sector forward at a rapid pace. Now, in an era of explosive demand for AI hardware, Elon Musk is setting out, with characteristic ambition, to challenge giants like Nvidia and AMD. Recently, Musk unveiled his full computing-power lineup: the largely completed AI5, the training-inference integrated AI6, the space-oriented AI7, and even the Dojo supercomputer project, which outsiders had written off as dead, is set to be revived. Alongside these moves against the industry, Musk has begun hiring AI chip engineers directly. His goal is to release a new chip every nine months, upending the market's current iteration speed, and he is also preparing to build his own wafer fabrication plant to control the silicon lifeline at the source. Behind this "Iron Man" ecosystem blueprint, intense chip demand across his business lines makes the delivery capacity of the AI chip supply chain and the pace of technological iteration the key constraints on expansion. Pieced together, these signals outline a vast technological ecosystem spanning autonomous driving, robotics, satellite communications, and brain-computer interfaces. The man who once disrupted the automotive and rocket industries is now preparing to redraw the map of the AGI era.
The Great Unification of Computing Power

On January 19th, Musk made a major announcement: the design of Tesla's latest self-developed chip, AI5, is largely complete, and it aims to serve both smart cars and robots. Development of the next-generation AI6 chip has also begun; positioned as a training-inference integrated chip, it can be used in both robots and data centers. Musk added that Tesla will subsequently launch AI7, AI8, AI9, and beyond, targeting a design cycle of nine months per generation. "We expect that ultimately our chip production will exceed the sum of all other AI chips," Musk declared with a touch of bravado. "I'm not joking." To understand Musk's anxiety and ambition, one must first decode the three new cards he holds, AI5, AI6, and AI7, and the technological paradigm shift they represent. AI5, previously rumored as HW5.0 and now largely finished in design, is the vanguard of this transformation. Tesla had previously predicted that AI5's performance could be 50 times that of AI4. Musk indicated it will be a very powerful chip: a single SoC delivers roughly Nvidia Hopper-level performance, while a dual-chip configuration approaches Blackwell level, at significantly lower cost and markedly lower power consumption. In Musk's strategic game, the significance of AI5 extends far beyond autonomous driving. He emphasized that AI5 will be deployed not only in vehicles but also in the Optimus robot, with Tesla's smart cars and robots set to share the same FSD algorithms and hardware in the future. AI5 is thus the critical node in Tesla's strategy of a "shared brain" between vehicles and robots. As Tesla's humanoid robot Optimus evolves rapidly, Musk urgently needs a universal computing core that can handle both the high-speed mobility scenarios of cars and the complex limb-control scenarios of robots.
The emergence of AI5 signals that Tesla is eliminating the hardware barrier between cars and robots, attempting to drive both wheels and legs with the same "brain." This would greatly amortize R&D costs and accelerate the reuse of data across endpoints of different form factors. If AI5 is still an incremental step within traditional logic, AI6 attempts to upend the industry's underlying architecture. Musk defines it as a "training-inference integrated" chip, a declaration of war on existing AI infrastructure. In the AI industry's current division of labor, chips for model training in data centers (such as Nvidia's H100) and chips for inference at the edge (such as the onboard FSD chip) are two entirely different species, with different precision requirements, memory-bandwidth designs, and power constraints. AI6 attempts to break down this wall. The same piece of silicon could be installed in a moving car to process road conditions in real time, or stacked by the thousands in a data center, training the latest neural network models around the clock. If realized, this would let Tesla bridge the computing-power divide between the edge and the cloud entirely. Every Tesla parked in a garage could become a node in a supercomputer during idle hours; the potential of this kind of distributed computing is staggering. The more distant AI7 openly displays Musk's interstellar ambition. This chip is explicitly aimed at "space-based computing." It will no longer be confined to the temperate environment of Earth's surface but must contend with the radiation of cosmic rays and the heat-dissipation challenges of vacuum. AI7's target customers are SpaceX's Starship and Starlink. In Musk's ultimate vision, future intelligence should not reside solely in fiber-connected data centers but should cover the globe, and even Mars, via satellite networks.
AI7 would become the neurons of this space-based internet, enabling a distributed computing network spanning space and ground and providing the computational foundation for humanity to become a multi-planetary species. As for the Dojo project, previously rumored to have been paused due to performance shortfalls and key personnel departures, its high-profile restart indicates Musk has realized that chip design capability alone is insufficient; it must be matched by a training-cluster architecture. Dojo is seen as the cornerstone of Tesla's AI ambitions, expected to bring significant performance gains in processing autonomous-driving video data and optimizing neural network models. Morgan Stanley once estimated that a fully deployed Dojo could add tens of billions of dollars to Tesla's valuation.
Challenging Physical Limits

In the traditional automotive industry, chip iteration cycles typically span three to five years. Even Apple, the dominant force in consumer electronics, follows an annual update rhythm. Musk's proposed nine-month iteration cycle sounds not only crazy but almost contrary to the physical realities of semiconductor engineering. Behind this frantic acceleration lie three irresistible driving forces. The primary one is that the speed at which algorithms consume hardware has spiraled out of control. Tesla's FSD full self-driving technology has fully transitioned to an end-to-end neural network architecture, a technical route completely different from past rule-based code; it is more like a black box in which intelligence emerges from feeding in massive amounts of video data. Under this architecture, every order-of-magnitude increase in model parameters brings a qualitative leap in intelligent performance. The reality is that Tesla's software team iterates on algorithms far faster than hardware's Moore's Law. If the three-year hardware cycle persisted, Tesla's most advanced models would be bottlenecked by the computational ceiling of older chips for up to two years. Musk has said bluntly that Tesla's future annual demand for AI chips will be on the order of 100 million to 200 million units. Making software wait for hardware is a strategic delay he finds absolutely unacceptable. Secondly, this is the only way to seize the time window for embodied intelligence. Musk has repeatedly asserted that the humanoid robot Optimus will become the trillion-dollar pillar of Tesla's future market value, far exceeding the car business.
Unlike cars, which mostly move in a two-dimensional plane, robots must perform extremely complex balancing, grasping, and interaction operations in three-dimensional space, with demands on real-time computing, latency, and energy efficiency far more stringent than those of cars. Musk predicts that the next three to five years will be the critical window for the explosion of humanoid robotics and its standard-setting, much like the early chaotic competition among smartphones. If Tesla cannot establish an absolute generational lead through rapid hardware iteration during this period, its first-mover advantage will vanish once competitors catch up. The nine-month generational sprint is essentially about building a high wall of computing power on the eve of the industry's explosion. Finally, there is the anxiety over shaking off dependence on external computing power. Although Tesla is currently a major Nvidia customer, Musk knows well that in the AI gold rush, Nvidia, the shovel-seller, holds absolute pricing power and allocation authority. As Tesla's vehicle fleet approaches tens of millions and robot production is planned in the hundreds of millions, relying entirely on external procurement for core computing power would let steep hardware costs eat up all commercial profit. More importantly, entrusting the company's lifeline to Jensen Huang's allocations does not square with the need for security derived from Musk's "first principles." Through rapid nine-month iterations, Tesla aims to beat general-purpose GPUs on efficiency for specific tasks with dedicated ASICs, and thereby take back pricing power.
Ultimate Vertical Integration

Revealing the chip roadmap is just the beginning. This tech giant, who has gathered every cutting-edge trend from AGI, autonomous driving, and embodied intelligence to commercial spaceflight and brain-computer interfaces, has proposed a new plan: building his own 2-nanometer "TeraFab," a wafer fab on a massive scale. In his view, while TSMC and Samsung form the industry's duopoly, with money-printer-like profitability, their responsiveness in capacity expansion is sluggish. For a long time, global tech giants have mostly adopted the fabless model, focusing on design and outsourcing manufacturing to TSMC or Samsung. Musk, however, is re-examining this division of labor. The scars the global chip shortage left on the automotive industry during the pandemic have not yet healed; those days of forced production halts for lack of parts left a lasting impression on Musk. Hence the planned TeraFab, starting at a monthly capacity of 100,000 wafers and targeting an ultimate goal of 1 million wafers per month. It represents a challenge to global semiconductor capacity, timed for the collective takeoff of the five business lines, xAI, Tesla, Optimus, SpaceX, and Neuralink, around late 2025 to early 2026. In the view of industry insiders, with self-developed chips tied to deeply integrated manufacturing, or even its own production line, Tesla would hold sovereignty over its supply chain, no longer subject to foundries' scheduling and capacity allocation. The deeper calculation lies in extreme compression of cost and energy efficiency. BYD's success in power semiconductors has shown that while the IDM (Integrated Device Manufacturer) model is extremely asset-heavy, once scale is reached its cost advantage is crushing.
When Tesla needs to supply chips for millions of cars, tens of millions of robots, and even tens of thousands of satellites, the question is not just procurement cost but energy-efficiency optimization. Some chip-industry figures point out that generic manufacturing processes make compromises to accommodate all customers, whereas manufacturing its own chips would allow Tesla to optimize from the transistor level up, stripping out every unnecessary circuit and keeping only the parts most efficient for running FSD and Optimus neural networks. At a time when battery energy density has yet to see a breakthrough, efficiency gains derived from the manufacturing process directly determine a robot's battery life and a car's driving range.
The Bet and the Future

Looking past the dizzying technical terms and aggressive timelines, we see the tightly interlocking, closed-loop AI ecosystem Musk is constructing, in which every link feeds the next. At the front of this ecosystem are millions of Tesla cars driving around the world, acting like giant antennae that constantly collect real-world physical data, the most valuable fuel for training AI. Meanwhile, Optimus robots, soon to enter mass production, will extend data collection from roads to more complex indoor settings such as homes and factories, greatly enriching the data's dimensions. This flood of data is fed continuously to the cloud, where the restarted Dojo supercomputer and racks of stacked AI6 chips stand ready, devouring data day and night to train ever more powerful end-to-end neural network models. These models are then distributed back to the cars and robots via OTA updates, making them smarter. Above all this, the Starlink satellite network, powered by AI7 chips, not only fills communication blind spots unreachable by ground stations but, more importantly, is building a space-based computing network. In the future, a Tesla driving through a desert or an Optimus working in a remote mine could call on computing support from space in real time, no longer limited by the performance ceiling of local chips. In this grand vision, chips are the lifeblood of the ecosystem, and the nine-month iteration cadence is the heartbeat sustaining the vast organism. Musk understands very well that competition in artificial intelligence ultimately comes down to computing power, and more fundamentally, to the speed at which computing power evolves. Whoever can turn sand into computing power the fastest, and convert electricity into intelligence at the lowest cost, will define the rules of the future. Of course, this series of aggressive moves by Musk is fraught with significant risk.
Building a wafer fab in particular is the semiconductor industry's money pit: investments of tens of billions of dollars may produce no return for years. Furthermore, abandoning the general-purpose Nvidia ecosystem in favor of a closed Dojo software and hardware stack risks enormous sunk costs and lost time if the technical path proves misguided. Yet looking back at Tesla's history, from insisting on a pure-vision approach against conventional wisdom to removing radar altogether, Musk has always moved forward amid controversy and high-stakes gambles. He is not merely building cars, nor just robots; through extreme control of the underlying physical computing power, he is attempting to cultivate a form of "silicon-based life" capable of self-evolution. For the global technology industry, Tesla's computing-power sprint is both a warning bell and a call to charge. It declares that the war for AI hardware has escalated from simply competing on specs to a new dimension of competing on iteration speed and on closed-loop ecosystems. In this war, players who cannot keep up may not even qualify to stay at the table. Musk is proving, in an almost obsessive way, that on the road to Artificial General Intelligence (AGI), only those who hold sovereignty over computing power hold the key to the future.