Alphabet (GOOGL) is reportedly working, with support from Meta Platforms (META), to undermine the dominance NVIDIA (NVDA) has built on its CUDA software platform. Sources indicate the tech giant is optimizing its AI chips, known as TPUs (Tensor Processing Units), to run the PyTorch AI framework seamlessly. Notably, PyTorch has been deeply integrated with NVIDIA's CUDA since its 2016 launch, and Meta is a key architect of that open-source ecosystem.
Google is also considering open-sourcing portions of its code to boost adoption, internally branding the initiative "TorchTPU" and allocating significant resources to it. The collaboration with Meta aims to enable PyTorch to run natively and efficiently on Google's TPUs at scale, eliminating the need for developers to rewrite code when switching away from NVIDIA hardware and effectively lowering migration barriers.
Meta, one of NVIDIA's largest customers, is actively exploring alternatives to address the high cost and constrained supply of NVIDIA's chips. Reports last month revealed discussions of a multi-billion-dollar deal under which Meta would lease Google Cloud TPUs by 2026 and potentially procure the chips directly for its own data centers by 2027.
In technical terms, pairing Meta's PyTorch ecosystem with Google's TPU hardware would create a software pathway that bypasses CUDA. NVIDIA's AI leadership stems not only from the performance of its GPUs but also from CUDA's status as the industry's de facto "standard language" for AI development.
Neither Meta nor NVIDIA immediately responded to requests for comment. A Google spokesperson confirmed the plan, citing an October announcement: "Google Cloud is committed to offering end-to-end choices, from models and accelerators to frameworks and tools. PyTorch is highly popular, and we aim for seamless TPU integration. Demand for both TPU and GPU infrastructure is growing rapidly—our focus is delivering flexibility and scale for developers, regardless of hardware preference."
At Wednesday’s close, Alphabet and NVIDIA shares fell over 3%, while Meta dropped more than 1%.