In January 2009, an anonymous entity invented something called a "token." You contributed computing power, received tokens, and these tokens circulated, were priced, and traded within a consensus network. The entire crypto economy was born from this. Over a decade later, debates still rage about whether such tokens hold any real value. In March 2025, a man in a leather jacket redefined another type of entity also called a "token." You contribute computing power, produce tokens, and these tokens are immediately consumed within an AI inference and reasoning process: thinking, reasoning, coding, making decisions. The entire AI economy is accelerating because of this. Nobody debates the value of this type of token because you likely used millions of them just this morning. Two types of tokens, the same name, the same underlying structure: computing power goes in, something valuable comes out.
By March 2026, I was seated in the audience at the San Jose SAP Center, watching Jensen Huang pace the GTC stage and talk about tokens.
At that moment, I realized this man was doing something structurally identical to what the anonymous creator of the first token did 17 years ago.
The same set of conversion rules. The anonymous figure known as Satoshi Nakamoto wrote a nine-page white paper in 2008, designing a set of rules: contribute computing power, find a hash that clears the network's difficulty target (Proof of Work), and receive a crypto token as a reward. The brilliance of these rules lies in not requiring trust between parties: by accepting the rules, you automatically become a participant in that economy. The rules worked; they coordinated masses of strangers who might otherwise deceive each other.
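Satoshi's conversion rule is compact enough to sketch. The Python below is a toy, not the real protocol (Bitcoin hashes an 80-byte block header and encodes its target in a compact form; the header and difficulty here are invented), but the economic loop is faithful: burn computation until a hash clears the bar, then claim the reward.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Toy Proof of Work: find a nonce whose double-SHA-256 hash
    falls below a target. Each extra difficulty bit doubles the expected work."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # the proof: computing power converted into a token claim
        nonce += 1

# difficulty_bits=16 needs ~65,000 hashes on average; verification needs one.
print(mine(b"toy block header", difficulty_bits=16))
```

Verifying the proof takes a single hash; producing it took, on average, tens of thousands. That asymmetry is the entire trust mechanism.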
On the GTC 2026 stage, Jensen Huang did something structurally identical. He displayed a chart of the central trade-off in inference economics: the Y-axis showed throughput (tokens produced per megawatt of power), the X-axis showed interactivity (the token speed each individual user experiences). Below the X-axis, he listed five pricing tiers: Free using Qwen 3, at $0 per million tokens; Medium using Kimi K2.5, at $3 per million tokens; High using GPT MoE, at $6 per million tokens; Premium using GPT MoE 400K context, at $45 per million tokens; and Ultra, at $150 per million tokens. This chart could almost serve as the cover of Jensen Huang's "token economics" white paper.
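To make the chart concrete, here are the five tiers as data, with a small billing helper; the helper and the usage numbers are my own illustration, not anything shown on stage:

```python
# The five tiers from the GTC chart, in USD per million tokens.
TIERS = {
    "Free":    ("Qwen 3", 0.0),
    "Medium":  ("Kimi K2.5", 3.0),
    "High":    ("GPT MoE", 6.0),
    "Premium": ("GPT MoE 400K context", 45.0),
    "Ultra":   (None, 150.0),  # no model was named for this tier
}

def bill(tier: str, tokens: int) -> float:
    """USD cost of consuming `tokens` tokens at a given tier (illustrative)."""
    _model, usd_per_mtok = TIERS[tier]
    return usd_per_mtok * tokens / 1_000_000

# Hypothetical usage: a month of heavy agentic coding, ~30M tokens.
print(bill("High", 30_000_000))   # 180.0
print(bill("Ultra", 30_000_000))  # 4500.0
```

The fifty-fold spread between Medium and Ultra is precisely the segmentation "into different parts" that Huang was predicting.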
Satoshi defined "what constitutes valuable computation": finding a block hash that falls below the network's difficulty target was valuable. Jensen Huang defined "what constitutes valuable reasoning": producing tokens for specific scenarios at a specific speed under given power constraints is valuable. Neither Satoshi Nakamoto nor Jensen Huang produced tokens directly; both defined the production rules and pricing mechanisms for tokens.
A statement Huang made on stage could be written directly into the abstract of a token economics white paper: "Tokens are the new commodity, and like all commodities, once it reaches an inflection, once it becomes mature, it will segment into different parts." He wasn't describing the current state; he was predicting a market structure and precisely aligning his hardware product line across every layer of that structure.
The production processes of the two tokens even have a semantic symmetry: mining versus inference. The essence of both is converting electricity into money. Miners pay for electricity, mine crypto tokens, and sell them; inference operators and AI Agents pay for electricity, generate AI tokens, and sell them to developers by the million. The intermediate steps differ, but the ends are the same: the input is the electricity meter, the output is revenue.
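The symmetry survives contact with arithmetic. A back-of-envelope sketch, in which every constant is an assumption invented for illustration rather than a figure from the keynote (except the $3 tier price quoted above):

```python
# Illustrative unit economics: joules in, dollars out.
# Every constant here is an invented assumption, not a measured figure.
POWER_PRICE = 0.06        # USD per kWh of industrial power

rig_power_kw = 3.5        # hypothetical mining rig draw
rig_revenue_hr = 0.30     # hypothetical USD/hour of crypto mined

gpu_power_kw = 1.0        # hypothetical accelerator draw, cooling included
tokens_per_sec = 2_000    # hypothetical batched inference throughput
usd_per_mtok = 3.0        # the "Medium" tier price from the keynote chart

def margin_per_hour(revenue_hr: float, power_kw: float) -> float:
    """Hourly profit after electricity: the quantity both businesses optimize."""
    return revenue_hr - power_kw * POWER_PRICE

inference_revenue_hr = tokens_per_sec * 3600 / 1e6 * usd_per_mtok

print(f"mining:    {margin_per_hour(rig_revenue_hr, rig_power_kw):+.2f} USD/hour")       # +0.09
print(f"inference: {margin_per_hour(inference_revenue_hr, gpu_power_kw):+.2f} USD/hour") # +21.54
```

Structurally the two businesses are the same function: a power draw, a revenue rate, and a margin set by the electricity price. Only the source of the revenue differs.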
Two Forms of Scarcity. Satoshi's most critical design decision wasn't Proof of Work; it was the 21 million cap on the total supply of Bitcoin. He created artificial scarcity through code—no matter how many mining rigs joined the network, the total number of Bitcoin would never exceed 21 million. This scarcity is the value anchor for the entire crypto economy.
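That cap, notably, is not a constant stored anywhere in the code. It emerges from the issuance rule, a 50 BTC block reward that halves every 210,000 blocks, and a few lines of Python reproduce it:

```python
# The cap emerges from the halving schedule, not from a hard-coded "21,000,000".
reward = 50 * 100_000_000   # initial block reward, in satoshi (1e8 satoshi = 1 BTC)
total = 0
while reward > 0:
    total += 210_000 * reward  # blocks per halving epoch, times reward per block
    reward //= 2               # integer halving, as in the reference client
print(total / 100_000_000)     # 20999999.9769: asymptotically "21 million"
```

Satoshi's scarcity is five lines of arithmetic that everyone agreed to run.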
Jensen Huang, conversely, created natural scarcity using physical laws. He stated: "You still have to build a gigawatt data center. You still have to build a gigawatt factory, and that one gigawatt factory for 15 years amortized... is about $40 billion even when you put nothing on it. It's $40 billion. You better make for darn sure you put the best computer system on that thing so that you can have the best token cost." A 1GW data center can never become 2GW. This isn't a code limitation; it's a law of physics. Land, power, cooling—each has a physical upper limit. The number of tokens a $40 billion factory can produce over its 15-year lifespan depends entirely on the computational architecture installed inside.
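Running Huang's numbers shows the sensitivity he is pointing at. In the sketch below, the $40 billion and the 15 years are his figures; the factory's aggregate token throughput is my placeholder assumption:

```python
capex_usd = 40e9          # Huang's figure: the 1 GW factory over its life
years = 15                # Huang's amortization window
tokens_per_sec = 1e9      # ASSUMPTION: aggregate token throughput of the factory

seconds = years * 365 * 24 * 3600
capex_per_mtok = capex_usd / (tokens_per_sec * seconds) * 1e6

print(f"{capex_usd / years / 1e9:.2f}B USD per year of amortization")  # 2.67B
print(f"{capex_per_mtok:.4f} USD capex per million tokens")            # 0.0846
```

The capex is fixed; the only free variable is tokens per second. Double the throughput of the installed systems and the capex per token halves, which is exactly why "the best computer system on that thing" is the whole game.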
Satoshi's scarcity can be forked. If one dislikes the 21 million cap, they can fork a new chain, change it to 200 million, give it a new name, and issue another white paper. People have done this repeatedly. The scarcity Huang creates cannot be forked. One cannot fork the second law of thermodynamics, a city's power grid capacity, or the physical area of a piece of land.
Yet both Satoshi Nakamoto and Jensen Huang created scarcities that led to the same outcome: a hardware arms race. The history of mining runs CPU → GPU → FPGA → ASIC, each generation of specialized hardware rendering the previous one obsolete. The history of AI training and inference is replaying it: Hopper → Blackwell → Vera Rubin → Groq LPU. It starts with general-purpose hardware and ends with specialized hardware. The Groq LPU Huang showcased at this year's GTC, the deterministic dataflow processor NVIDIA brought in-house by acquiring Groq, is architecturally the ASIC of the inference domain: static compilation, all scheduling decided by the compiler rather than at runtime, 500MB of on-chip SRAM. It does one thing exceptionally well.
Interestingly, GPUs played a key role in both waves. Around 2013, miners discovered GPUs were better than CPUs for mining crypto tokens, leading to the first GPU shortages and giving NVIDIA an early taste of token-driven demand. A decade later, the same class of chips became the default engine of AI training and inference.
The World's Most Profitable Shovel. During the gold rush, the most profitable weren't the prospectors but the merchants, like Levi Strauss, who outfitted them with durable denim and supplies. During the mining boom, the most profitable weren't the miners but Bitmain and Jihan Wu, who sold mining rigs. In the AI pre-training and inference wave, the most profitable aren't the foundation models or Agents but NVIDIA, which sells this era's "mining rigs."
Around 2018, global computing power was concentrated in a few large mining pools (F2Pool, Antpool, BTC.com) competing for hash rate share, but with mining rig sources highly concentrated at Bitmain. Similarly, today, global AI computing power is concentrated in a handful of hyperscale clouds competing for capacity, while the hardware source is just as concentrated: NVIDIA.
But Bitmain eventually faced competitors: MicroBT, Innosilicon, and Canaan chipped away at its market share. Mining rigs are relatively simple ASIC designs, giving followers a chance. Challenging NVIDIA is a different order of difficulty: a modern accelerator is a vastly more complex design, and it ships welded to CUDA, the software ecosystem two decades of developers have standardized on.
The Fundamental Fork Between Two Tokens. What fundamentally differentiates crypto tokens from AI training and inference tokens is user motivation and psychology. The demand side for crypto tokens is speculation. Nobody "needs" Bitcoin to complete work. All white papers claiming blockchain tokens can solve problems are deceptive. You hold crypto because you believe someone will buy it from you at a higher price in the future. Bitcoin's value comes from a self-fulfilling prophecy: if enough people believe it has value, it does. This is an economy of belief.
The demand side for AI tokens is productivity. Nestlé needs tokens for supply chain decisions—reducing its supply chain data refresh time from 15 minutes to 3 minutes, cutting costs by 83%, a value directly mapped to the P&L. One hundred percent of
Crypto tokens are produced to be held and traded; their value lies in not being used. AI tokens are produced to be consumed immediately; their value lies in the moment they are used. One is digital gold, gaining value when hoarded; the other is digital electricity, produced to be burned. This distinction is why the AI token economy will not become as bubble-prone as the crypto one. Bitcoin's price swings wildly because speculative asset prices are driven by sentiment; AI token prices are driven by usage volume and production cost. As long as AI remains useful, as long as people use Claude Code for coding, ChatGPT for report writing, and Agents for business processes, the demand for tokens will not collapse. It doesn't rely on belief; it relies on indispensability.
In 2008, the Bitcoin white paper needed to argue, again and again, why a decentralized electronic cash system had value. Eighteen years later, people are still debating. In 2026, token economics sparked no debate; it achieved consensus without needing justification. When Huang stood on the GTC stage and said "tokens are the new commodity," nobody questioned it, because everyone in the audience had consumed millions of tokens through Claude Code or ChatGPT that very morning. They didn't need to be convinced tokens have value; their credit card bills had already proven it.
In this sense, Huang truly is Satoshi Nakamoto's counterpart: the one who stayed behind to monopolize "mining rig" production, define the token's usage scenarios and specifications, and host an annual show at the San Jose SAP Center unveiling the next generation of "mining rigs" for AI training and inference. Satoshi had the enigmatic charm of restraint: he designed the rules, handed them to code, and vanished, a cypherpunk romance. Huang is a businessman to the core: he designs the rules, maintains them personally, keeps laying bricks, and keeps deepening his moat. The token you once had to believe in to see, you can now see without believing. It is the next unit after the watt, the ampere, and the bit.