Token Economy: Emergence, Challenges, and Governance Strategies

Deep News · 04-13

The Chinese translation for "Token" was officially announced as "词元" (cíyuán) by Liu Liehong, Director of the National Data Administration, at the China Development Forum 2026 on March 23, 2026. This designation marks the Token-based intelligent economy's move from the technological domain into the national strategic spotlight. The move was prompted by the explosive growth in Token usage, which has surged within less than a year, and particularly in recent months. This reflects the rapid, comprehensive implementation of the "AI+" initiative and the swift, large-scale adoption of AI agents. The trend signifies a profound transformation in production factors and a fundamental restructuring of value-creation logic, giving rise to the new economic form known as the Token economy.

Analyzing the unique economic attributes of Tokens is crucial not only for interpreting current growth drivers but also for proactively understanding the economic principles of the intelligent era. Systematically studying the Token economy and exploring its theoretical challenges and new governance imperatives is of paramount strategic importance for China to seize the high ground in intelligent economic development and lead the transition to new quality productive forces.

**1. Token Usage Accurately Reflects AI Application Levels**

The attention on Tokens stems from their exceptionally rapid growth in usage. According to National Data Administration monitoring, China's average daily Token usage surged from approximately 100 billion in early 2024 to 140 trillion by March 2026, a 1,400-fold increase in 26 months. Compared to the 100 trillion at the end of 2025, it grew over 40% in just three months. Globally, China is the largest user of Tokens, with usage nearly double that of the United States, which ranks second. Data from OpenRouter, the world's largest AI model API aggregation platform, shows that from March 16 to 22, 2026, global large AI model usage totaled 20.4 trillion Tokens. Chinese AI models accounted for 7.359 trillion Tokens, or 36% of the global total, exceeding U.S. usage of 3.536 trillion Tokens for the third consecutive week. J.P. Morgan even predicts that from 2025 to 2030, China's Token consumption will see a compound annual growth rate of 330%, a 400-fold increase over five years. China's AI inference Token consumption is projected to grow from about 10 quadrillion in 2025 to approximately 3,900 quadrillion by 2030.
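As a quick arithmetic check, the minimal sketch below recomputes the headline growth figures from the daily-usage numbers quoted above; no new data is introduced, the values are simply those reported in the text.

```python
# Rough sanity check of the headline growth arithmetic (values as reported above).
early_2024 = 100e9    # ~100 billion Tokens per day in early 2024
end_2025 = 100e12     # ~100 trillion Tokens per day at the end of 2025
mar_2026 = 140e12     # ~140 trillion Tokens per day by March 2026

fold_increase = mar_2026 / early_2024       # -> 1,400-fold over roughly 26 months
quarterly_growth = mar_2026 / end_2025 - 1  # -> ~40% in about three months

print(f"Increase since early 2024: {fold_increase:,.0f}x")
print(f"Growth over the latest quarter: {quarterly_growth:.0%}")
```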

The importance of Tokens is further highlighted by their role as the output unit and pricing benchmark for digital-intelligent services. Previously, the output unit and pricing benchmark for information services were bytes. With the advent of large AI models, the Token has become the output unit for services provided by these models and has consequently developed into the pricing benchmark.

The rapid growth in Token usage is a result of the swift development of AI agents. The year 2026 is widely regarded by industry as the "first year of large-scale AI agent application." Due to breakthroughs in large language model (LLM) technology, AI agents have achieved significant leaps in deep language understanding, knowledge reasoning, and code generation capabilities. This has led to their large-scale deployment across various application scenarios, creating substantial economic value and even giving rise to "digital employees" capable of autonomous planning, tool utilization, and continuous learning. The rapid advancement of agent technology has led to a proliferation of AI agent products, creating a vibrant and competitive landscape within just a few months. In short, AI agents have rapidly evolved from a cutting-edge technological concept into a clearly structured, ecologically active emerging industry worth hundreds of billions of yuan.

Token usage can serve as an indicator of AI application levels and plays a crucial role in implementing the 15th Five-Year Plan. The development and application of intelligent technology and AI are heavily emphasized in the national 15th Five-Year Plan outline. The outline proposes adhering to an intelligent, green, and integrated direction for economic development focused on the real economy. It lists AI technology as a top priority for frontier technology research, emphasizing the need to deepen research on key algorithmic areas such as explainability and decision-making. The outline also explicitly calls for accelerating digital-intelligent technological innovation, fully implementing the "AI+" initiative, strengthening the integration of AI with technological innovation, industrial development, cultural development, social welfare, and governance, seizing the high ground in AI industrial applications, and empowering all industries.

The extremely rapid growth of Tokens is a unique economic phenomenon, underscoring the necessity and urgency of in-depth research into the Token economy.

**2. The Unique Economic Attributes of Tokens**

Tokens possess economic attributes distinct from traditional factors of production. Tokens are, in essence, a type of data, and data itself is already a unique factor of production compared with labor, capital, management, and technology. As a data type, Tokens inherit the inherent properties of data as a production factor while exhibiting their own uniqueness. Deeply analyzing the special economic attributes of Tokens means exploring what sets the intelligent economy apart, potentially signaling a paradigm shift for classical economic theory. Understanding these attributes will enable policymakers to assess situations and make decisions more accurately.

**(A) Tokens' Evolution from Technical Parameter to Economic Unit**

Technically, Tokens originated as a standardized data type, the basic unit for discretizing and structuring continuous text in natural language processing (NLP). This technical definition, while seemingly simple, carries profound economic significance. In traditional information processing, data often existed in unstructured forms, requiring complex manual processing and interpretation to convert into structured data before economic value could be generated and realized. Tokens emerged through a similar process: various forms of unstructured language are processed into measurable, exchangeable standardized units, enabling the measurability and tradability of information value, a key prerequisite for data factor marketization.

The economic attributes of Tokens as a data type manifest in three dimensions. First, Tokens enable the standardized encapsulation of information. Whether text, code, or multimodal information, all are ultimately converted into unified Token sequences, making intelligent services from different sources and formats comparable and substitutable. Second, Tokens establish a precise measurement system. The National Data Administration's ability to monitor China's average daily Token usage relies on major model platforms already using Tokens as a standard statistical unit, displayed to users and used as the basis for charging. Finally, a transparent pricing mechanism has formed based on Tokens. Currently, mainstream models typically price usage per million Tokens, a level of price transparency difficult to achieve in traditional data transactions.
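To make the per-million-Token pricing mechanism concrete, here is a minimal billing sketch. The rates are hypothetical placeholders for illustration, not the prices of any actual provider.

```python
def token_bill(input_tokens: int, output_tokens: int,
               price_in_per_m: float, price_out_per_m: float) -> float:
    """Compute a usage bill from per-million-Token prices (illustrative only)."""
    return (input_tokens / 1e6) * price_in_per_m + (output_tokens / 1e6) * price_out_per_m

# Hypothetical rates: $0.50 per million input Tokens, $1.50 per million output Tokens.
print(token_bill(2_000_000, 500_000, price_in_per_m=0.50, price_out_per_m=1.50))  # 1.75
```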

The economic attributes of Tokens are also evident in the simultaneous emergence of their product and commodity properties. Unlike traditional data products that are produced first and consumed later, Tokens are "produced" instantaneously when a user invokes a model, representing typical "on-demand generation" of data. This characteristic gives the Token economy extreme demand responsiveness and resource utilization efficiency. When a user poses a question, the model does not retrieve a pre-stored answer but dynamically generates a Token sequence appropriate to the context. This means the value of a Token depends not only on its informational content but also heavily on the specific context and user needs at the moment of generation, exhibiting high situational dependency and value uncertainty.

From an industrial practice perspective, Tokens have already formed a complete technology stack and ecosystem. The foundation layer consists of neural network architectures like Transformer; the middle layer comprises large models such as GPT, DeepSeek, and Wenxin Yiyan (Ernie); the upper layer consists of various applications based on API calls. This layered architecture enables a specialization of labor between Token production (model training) and consumption (API calls). The massive fixed costs invested during the training phase (e.g., GPT-4 training cost exceeding $100 million) are amortized over vast numbers of API calls, ultimately driving marginal costs toward zero (cost to generate a million Tokens is less than $1). It is this unique data production and consumption model that gives the Token economy its distinctive structural feature of "high fixed costs and near-zero marginal costs."
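A minimal sketch of the amortization logic described above, assuming, for illustration only, a $100 million one-time training cost and a constant marginal cost of $0.20 per million Tokens served:

```python
# How average cost per million Tokens falls toward the marginal cost as volume grows.
FIXED_TRAINING_COST = 100e6   # assumed one-time training cost, ~$100 million
MARGINAL_COST_PER_M = 0.20    # assumed marginal cost per million Tokens served

for millions_served in (1e3, 1e6, 1e9):  # i.e., 1 billion, 1 trillion, 1 quadrillion Tokens
    avg_cost = FIXED_TRAINING_COST / millions_served + MARGINAL_COST_PER_M
    print(f"{millions_served:,.0f} million Tokens -> average cost ${avg_cost:,.2f} per million")
```

As volume grows, the fixed-cost share shrinks and the average cost converges on the marginal cost, which is the "high fixed cost, near-zero marginal cost" structure the paragraph describes.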

**(B) Tokens Share the Non-Traditional Characteristics of Data Factors**

When we elevate our perspective from "data type" to "factor of production," data (including its specific form, Tokens) reveals fundamental economic differences from traditional factors (land, labor, capital). These differences form the theoretical cornerstone for understanding the Token economy.

First, data as a factor of production is non-rivalrous and infinitely reusable. Traditional factors are rivalrous; one person's use prevents simultaneous use by another. Data factors are non-rivalrous; one person's use does not affect another's. In the Token economy, the same trained large model can simultaneously serve millions of users. Each user receives a Token sequence uniquely generated in real time by the model based on their specific query, yet the model's knowledge and capabilities are not depleted. This non-rivalry allows the marginal cost of the data factor to approach zero once a certain scale is reached, fundamentally upending traditional value theory based on scarcity. It also explains why leading model providers can keep implementing significant price reductions: through extreme economies of scale, they amortize astronomical fixed training costs (hundreds of millions to billions of dollars) over massive call volumes.

Second, data factors exhibit positive externalities and reinforcing network effects. The input-output relationship of traditional factors typically follows the law of diminishing marginal returns. Data factors possess strong positive externalities; their value increases exponentially with the scale of use and the expansion of application scenarios, creating a "data flywheel effect." In the Token economy, this manifests as a self-reinforcing positive feedback loop: more user calls generate more high-quality interaction data, which is used to optimize the model, and improved model performance in turn attracts more users. This makes it difficult for the market to converge to a static equilibrium, instead favoring dynamic disequilibrium structures like "winner-take-all" or oligopoly.
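The feedback structure can be illustrated with a deliberately simple toy model; every coefficient below is an arbitrary assumption chosen only to show how the loop self-reinforces, not an empirical estimate.

```python
# Toy data-flywheel dynamic: usage produces data, data improves the model,
# and a better model attracts more usage. All parameters are illustrative assumptions.
users, quality = 1.0, 1.0
for step in range(5):
    data = 0.5 * users                    # more calls generate more interaction data
    quality += 0.3 * data                 # data is used to optimize the model
    users *= 1.0 + 0.2 * (quality - 1.0)  # better performance attracts more users
    print(f"step {step}: users={users:.2f}, model quality={quality:.2f}")
```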

Third, the value realization of data factors is highly context-dependent and non-standardized. The value of land and capital is relatively stable, and labor value is primarily determined by market wage rates. The value of data factors depends heavily on the specific application context; the same dataset can yield economic value differing by orders of magnitude in different scenarios. For the Token economy, this means that an identical output of one million Tokens can create vastly different economic value and social utility depending on whether it is used for casual conversation, programming assistance, or medical diagnosis. This extreme value uncertainty renders traditional pricing theories based on "production cost" largely ineffective. Consequently, Token market pricing has generally abandoned marginal-cost pricing in favor of strategies such as "value-based pricing" (based on output utility) and two-part tariffs (subscription fee plus usage fee).
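As an illustration of the two-part tariff mentioned above, here is a minimal sketch with hypothetical fee levels: a flat subscription plus a per-million-Token usage charge.

```python
def two_part_tariff(tokens_used: float, subscription: float = 20.0,
                    price_per_m: float = 0.80) -> float:
    """Two-part tariff: flat subscription fee plus a per-million-Token usage fee.
    Both fee levels are hypothetical, for illustration only."""
    return subscription + (tokens_used / 1e6) * price_per_m

print(two_part_tariff(50e6))  # 20 + 50 * 0.80 = 60.0
```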

Fourth, data factors exhibit nonlinear input-output relationships and threshold effects. Input and output for traditional factors usually show a continuous, smooth relationship. For data factors, especially those used to train large models, there is a significant nonlinear relationship and threshold effect between input and output. In model training, data volume, algorithm complexity, and computing power input must simultaneously reach a critical point before model capabilities make a qualitative leap (e.g., the transition from GPT-3 to GPT-4). This "emergent" capability cannot be explained by simple marginal increments. This poses challenges for macroeconomic analysis; traditional production functions (like the Cobb-Douglas function) cannot capture the independent contribution of "intelligent capital." If "intelligent capital" is introduced as a factor in the production function, its nonlinear dynamic characteristics must be considered and handled.
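One possible formalization of this point, offered purely as a sketch rather than an established specification: extend a Cobb-Douglas production function with an "intelligent capital" term gated by a threshold, so that its contribution only switches on once the combined data-compute input T crosses a critical level T_c.

```latex
% Illustrative threshold-augmented production function (not an established model).
% Y: output, K: capital, L: labor, T: intelligent-capital input (data x compute),
% \sigma(T): a sigmoid gate that stays near 0 until T crosses the critical point T_c.
Y = A\,K^{\alpha}L^{\beta}\bigl[\sigma(T)\bigr]^{\gamma},
\qquad
\sigma(T) = \frac{1}{1 + e^{-k\,(T - T_{c})}}
```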

**(C) Tokens as a Unique Data Factor with Distinctive Traits**

Tokens are a dominant, standardized, and priced data type among data factors. While possessing the universal attributes of data factors, they also exhibit many extreme and explicit special economic properties. It is these particularities that make the Token economy the most direct challenge to real-world markets and traditional theory.

First, the cost structure of Tokens is extremely "L-shaped," representing the ultimate form of economies of scale. The non-rivalry of data factors materializes in the Token economy as an extremely exaggerated "L-shaped" cost curve, a fundamental overturning of the traditional "U-shaped" cost curve. The model training phase involves a one-time, astronomical fixed-cost investment (covering computing power rental, data procurement, and algorithm R&D). However, once the model is trained and enters the inference service phase, the marginal cost of generating each additional Token (mainly electricity and equipment depreciation) is very low and essentially constant. This leads to potentially the most extreme economies of scale in human economic history: the larger the user base and call volume, the smaller each unit's share of the huge fixed costs. The average cost curve continues to decline, with no visible point at which diseconomies of scale arise from increased management complexity, as seen in traditional manufacturing. This perfectly explains the phenomenon between 2024 and 2026 in which the prices of mainstream model APIs plummeted by 60%-80% while total market revenue doubled: essentially, an inevitable price reduction once scale expansion had amortized fixed costs, coupled with market expansion.
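In formula form, the "L-shaped" average cost described above is simply a fixed training cost F spread over the volume Q of Tokens served, plus an approximately constant marginal cost c (a stylized sketch, not a calibrated model):

```latex
% Stylized "L-shaped" average cost curve.
AC(Q) = \frac{F}{Q} + c,
\qquad
\lim_{Q \to \infty} AC(Q) = c
```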

Second, the Token market is dynamically non-equilibrium, exhibiting a "winner-take-all" lock-in effect. The network effects of data manifest in the Token economy as powerful market lock-in effects and path dependence. Because of the data flywheel (more use leads to better models) and development ecosystem dependency (more applications built on a specific API), once a leader emerges in the market, its advantage can continuously self-reinforce. The market struggles to converge to a static competitive equilibrium, instead remaining in a state of dynamic disequilibrium. Currently, the foundational model layer has formed extremely high barriers in computing power, data, and talent, with fewer than 10 core players globally, presenting an oligopolistic structure. In contrast, the application layer, benefiting from open-source models and fine-tuning technologies, exhibits a fully competitive long-tail market. The competition paradigm has shifted completely from price wars to a battle over "intelligent price-performance ratio." Users' decision functions have changed: they are willing to pay a premium for performance improvements, forcing vendors to shift their competitive focus from cost control to a continuous technological "arms race." Giants controlling the underlying model APIs essentially hold the "gateway" to the intelligent economy, and their profit model is shifting from selling products to collecting an "intelligence tax" backed by pricing power.

Third, Token value realization creates a "deflation-inflation" split, leading to macroeconomic policy dilemmas. The context dependency of data manifests at the macro level as an unprecedented structural split in price levels, a "cost deflation-resource inflation" paradox. On one hand, there is service-side deflation. AI (via Token calls) drastically reduces the production costs of knowledge-intensive services. Prices for machine translation, code generation, copywriting, and the like have plummeted, bringing enormous "technology dividends" and implicit consumer welfare gains to society. On the other hand, there is resource-side inflation. The physical resources underpinning AI operation, namely high-end GPUs (such as the H100), high-bandwidth memory (HBM), and data center electricity, are experiencing explosive demand growth, driving their prices continuously upward. In 2025, the growth rate of China's AI computing power demand (approximately 150%) far exceeded the supply growth rate of high-end GPUs (approximately 40%), with the supply-demand gap pushing costs higher. This split means traditional inflation indicators (like the CPI) could be severely "distorted," failing to reflect the real economic cost structure. Monetary policy thus faces a dilemma: if it tightens to curb resource inflation, it may hinder or delay burgeoning AI service innovation and its deflationary dividends; if it does nothing, computing power cost pressures may eventually propagate through the entire economic system.

Fourth, there is a "triple failure" in the accounting and statistics of Tokens as a factor. First, accounting failure: corporate purchases of API services (which form "intelligent capital") are often recorded as expenses rather than assets in financial statements, distorting the true value of enterprises and macro-level investment data. Second, statistical failure: the vast consumer surplus created by the large volume of zero-marginal-cost AI services (e.g., free API tiers, open-source models) is completely excluded from GDP calculations because it cannot be priced, leading to a systematic underestimation of real social welfare in the digital age. Third, regulatory failure: traditional antitrust tools, based on market share and price discrimination, struggle to address "algorithmic power" monopolies, such as control over key API interfaces and ecosystem lock-in. Furthermore, regulation based on the cost-plus logic of physical goods is almost ineffective in digital service markets where marginal costs approach zero.

**3. Challenges and Recommendations Stemming from Tokens' Unique Economic Attributes**

The unique economic attributes of Tokens pose systematic challenges to the existing economic governance framework. Urgent innovation is needed in three dimensions (statistical monitoring, competition policy, and theoretical paradigms) to establish a governance system better suited to development needs.

First, improve the statistical monitoring system. The current System of National Accounts (SNA) struggles to capture the true value of the Token economy. Corporate API calls are recorded as intermediate consumption rather than capital formation, leading to an underestimation of "intelligent capital" investment. The huge consumer surplus created by the large volume of zero-priced AI services falls completely outside GDP statistics, causing a systematic underestimation of real welfare in the digital economy era. With Token usage growing at double-digit monthly rates, the existing statistical system cannot accurately reflect the economic significance of this growth. A compatible statistical framework should be constructed, including a monthly statistical release system covering core indicators such as Token usage volume, computing power utilization rates, and model performance-price ratios.

Second, strengthen relevant competition policies. Traditional antitrust theory focuses on market share and price discrimination, whereas monopoly in the Token economy may manifest as exclusive control over key model API interfaces. Regulatory tools based on the "cost-profit" logic of physical goods are also unsuitable for digital service markets characterized by near-zero marginal costs and strong network effects. The regulatory focus needs to shift from "structural remedies" like breaking up firms toward "behavioral remedies" and "algorithmic governance" that ensure fair access and demand algorithm transparency and explainability. A new competition policy paradigm adapted to the Token economy and the intelligent economy must be established.

Third, innovate economic theory research. The Token economy exposes multiple limitations of classical economic paradigms. For example: production functions fail to capture the contribution of intelligent capital; equilibrium theory struggles to explain dynamic non-equilibrium states; marginal cost pricing fails when marginal costs approach zero; Coase's theory of the firm's boundaries is challenged by AI agents reducing transaction costs. Theory serves practice; economics needs to develop new analytical frameworks capable of handling non-rivalrous goods (models), rivalrous resources (computing power), positive externalities (data network effects), and dynamic increasing returns to scale.

The Token economy is a new phenomenon. We should use scientific statistics to accurately observe the real economic situation and compensate for blind spots in traditional accounting. The state should coordinate the development of a robust computing power infrastructure foundation to prevent key resource monopolies from stifling innovation. An adaptive regulatory framework should be established to manage new risks and better balance regulation with development. Basic theoretical innovation should be encouraged, and active participation in shaping international rules is necessary to compete for discursive power in the era of the intelligent economy. Only through systematic, forward-looking institutional innovation can China's advantages—its massive market scale, rich application scenarios, and vast data resources—be effectively transformed into technological, industrial, and regulatory advantages in the AI era.

