Anjney Midha, a former general partner at Andreessen Horowitz (a16z) and an early personal investor in Anthropic, is finally ready to unveil his highly anticipated AI infrastructure startup. The venture, previously reported to be seeking over $10 billion in funding, is named AMP.
AMP is building an "AI computing grid": a system modeled on a centralized power grid that dispatches electricity. This model aims to fundamentally change how AI developers access scarce server resources. The name is fittingly derived from the ampere, the unit of electric current.
In an in-depth discussion at NVIDIA's GTC conference, Midha made clear he is advancing an idea he has developed over several years: that AI computing power should be sold like a public utility, lowering costs and broadening access.
OpenAI CEO Sam Altman recently discussed a similar concept at a BlackRock infrastructure summit, though he suggested OpenAI itself would become a supplier of such resources. Regardless, this represents a significant shift from the current sales models for AI infrastructure.
During his tenure at Andreessen Horowitz, Midha built a prototype for AMP called the Oxygen computing cluster, which pooled NVIDIA chips for use by the firm's portfolio companies. Concerned that AI computing resources were rapidly concentrating in the hands of a few companies with large GPU holdings, he decided to spin the project out into a standalone company.
Currently, NVIDIA Graphics Processing Units (GPUs) are typically supplied through long-term leases (reserved instances) or hourly rentals (spot instances). Midha believes this allocation method is fundamentally inefficient.
Similar to how electrical grids a century ago became crucial for sharing scarce power, AMP aims to provide the same shared model for AI developers needing servers. Midha envisions a system where AI developers no longer need to procure and maintain their own infrastructure—whether by leasing from cloud providers or purchasing from chip companies—instead using a more efficient shared system.
Midha declined to name other partners involved with AMP, whether server suppliers or computing power users, but stated that top research labs and cloud providers are already participating.
He did not disclose AMP's specific business model. Unlike companies that build and operate their own data centers, AMP will launch an application that connects server suppliers with AI developers in need of resources. Midha compares this to an Independent System Operator in the power sector—an entity that may not own the underlying infrastructure but is responsible for balancing supply and demand.
To achieve this, AMP is developing software that allocates a shared pool of computing power among AI developers and schedules the runtime and nodes for different computational tasks. However, AMP will neither rent out GPUs by the hour nor charge AI developers directly.
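AMP has not published technical details of its scheduler, so the following is a purely illustrative sketch of the grid-operator idea: jobs drawing on a shared GPU pool are dispatched to the earliest window with enough free capacity, rather than each team holding its own dedicated reservation. All names, priorities, and capacities here are hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    # Only priority participates in ordering; lower value = dispatched first.
    priority: int
    name: str = field(compare=False)
    gpus_needed: int = field(compare=False)
    hours: int = field(compare=False)

def dispatch(jobs, pool_gpus):
    """Greedily schedule jobs by priority against one shared GPU pool.

    Returns (job_name, start_hour) tuples; each job starts at the first
    hour from which enough GPUs stay free for its whole duration.
    Hypothetical logic, for illustration only.
    """
    heapq.heapify(jobs)
    free = {}  # free[h] = GPUs still available during hour h
    schedule = []
    while jobs:
        job = heapq.heappop(jobs)
        t = 0
        while True:
            window = range(t, t + job.hours)
            if all(free.get(h, pool_gpus) >= job.gpus_needed for h in window):
                for h in window:
                    free[h] = free.get(h, pool_gpus) - job.gpus_needed
                schedule.append((job.name, t))
                break
            t += 1
    return schedule

jobs = [
    Job(priority=0, name="training-run", gpus_needed=512, hours=3),
    Job(priority=1, name="inference", gpus_needed=256, hours=2),
    Job(priority=2, name="research", gpus_needed=384, hours=1),
]
print(dispatch(jobs, pool_gpus=640))
# → [('training-run', 0), ('inference', 3), ('research', 3)]
```

In this toy version the spiky training run claims the pool first, and the smaller workloads are slotted into the capacity left over, which is the supply-and-demand balancing role the article compares to an Independent System Operator.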
Beyond GPUs, AMP plans to support developers in renting various types of AI hardware. Midha did not specify if Google's Tensor Processing Units (TPUs) would be included, but given that AMP's founding team includes engineers who previously managed Google's large-scale internal infrastructure systems, the capability to build such a system is plausible. Previous reports indicate Google has taken steps to make TPUs available to AI developers outside of Google Cloud.
Other companies, such as Together AI and NVIDIA itself—which previously attempted to create a marketplace for idle GPUs—also aggregate AI servers from various sources, but AMP's model is difficult to compare directly.
Midha stated, "You have to be a neutral, independent body that sets a single standard and allows everyone to participate."
AMP plans to release a mission statement later today aimed at attracting more companies to join this computing grid. It will be interesting to see which organizations ultimately participate.
Currently, AI companies often view server resources as a strategic advantage, so appropriate economic incentives would be needed to encourage them to use the AMP system or contribute their own servers.
Given Midha's close ties as an early investor to Anthropic, it is reasonable to suspect that Anthropic, the developer of Claude, might be involved in the project.
Midha declined to comment on the company's capital structure but mentioned that hundreds of millions of dollars in startup capital have been invested in the project over the past several months.
The inspiration for founding AMP came from Midha's experience working with Anthropic and other startups, where he witnessed the critical importance of servers for developing new models. Training new models by scaling computing power is a well-known principle in AI, often referred to as "scaling laws."
However, he noted that achieving computational scale in practice is difficult because developer demand for computing equipment is highly unpredictable.
"Looking at compute loads, the demand is incredibly spiky," he said. "A team's load pattern typically involves a massive peak for a training run, followed by periods of research and inference work. It's extremely hard to forecast."
This leads to AI developers either under-provisioning servers reserved from cloud providers or over-provisioning, resulting in significant idle capacity. Developers also constantly face a dilemma: whether to use scarce computing power to train better models or to run existing models to serve customers and generate revenue.
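The provisioning dilemma described above can be made concrete with a small back-of-the-envelope calculation. The demand trace and capacity figures below are hypothetical, invented solely to illustrate the tradeoff: a reservation sized for the training-run peak sits mostly idle, while one sized for average demand leaves the spike unmet.

```python
# Hypothetical hourly GPU demand for one team: mostly modest research and
# inference load, with a two-hour training spike.
demand = [40, 60, 50, 900, 950, 80, 60, 70]

def provisioning_stats(demand, reserved):
    """Utilization of a fixed reservation and the demand it fails to cover."""
    used = sum(min(d, reserved) for d in demand)          # GPU-hours actually used
    shortfall = sum(max(d - reserved, 0) for d in demand)  # GPU-hours unmet
    utilization = used / (reserved * len(demand))
    return utilization, shortfall

peak = max(demand)                # over-provision: size for the spike
avg = sum(demand) // len(demand)  # under-provision: size for the average

util_peak, short_peak = provisioning_stats(demand, peak)
util_avg, short_avg = provisioning_stats(demand, avg)
print(f"reserve for peak ({peak} GPUs): {util_peak:.0%} utilized, {short_peak} GPU-hours unmet")
print(f"reserve for avg  ({avg} GPUs): {util_avg:.0%} utilized, {short_avg} GPU-hours unmet")
# → reserve for peak (950 GPUs): 29% utilized, 0 GPU-hours unmet
# → reserve for avg  (276 GPUs): 41% utilized, 1298 GPU-hours unmet
```

Neither fixed reservation is attractive, which is the inefficiency a shared pool is meant to smooth out across many teams whose spikes rarely coincide.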
"Some of the most productive teams in global frontier research are also the most inefficient users of this most valuable resource: compute," he remarked.
In many cases, this incentivizes companies to hoard AI server chips, even when a large portion of that equipment sits idle. "That's something that deeply bothers me," he said.