The beginning of 2026 saw the rise of "OpenClaw," a lobster-themed project that became a global developer-community phenomenon. Within just over a month, spurred by the "lobster effect" and its deep integration with the OpenClaw ecosystem, AI firm MiniMax has rapidly transitioned from a technical validation phase to a scaled monetization phase.
According to a recent Morgan Stanley research report, the commercialization momentum of Chinese AI unicorn MiniMax has far exceeded expectations. The company's Annual Recurring Revenue (ARR) surged from $100 million to $150 million in just two months, an increase of 50%.
Concurrently, token usage for the M2 model in February 2026 skyrocketed sixfold compared to December 2025, while the inference cost per token saw a simultaneous sharp decline of over 50%. The bank maintained its Overweight rating on the company with a target price of HKD 990, suggesting approximately 23% upside from the current share price.
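The headline figures above can be cross-checked with a few lines of arithmetic. This is a sketch using only numbers stated in the article; the implied current share price is derived from the target price and upside, not given in the source.

```python
# Sanity-check the report's headline figures.
# Inputs come from the article; the implied share price is derived.

arr_dec, arr_feb = 100, 150  # ARR in USD millions, Dec 2025 -> Feb 2026
arr_growth = (arr_feb - arr_dec) / arr_dec
print(f"ARR growth: {arr_growth:.0%}")  # 50%

target_price = 990  # HKD, Morgan Stanley target price
upside = 0.23       # ~23% upside stated in the report
implied_price = target_price / (1 + upside)
print(f"Implied current share price: ~HKD {implied_price:.0f}")  # ~HKD 805
```

Note that $100M to $150M is exactly 50% growth, and a HKD 990 target with ~23% upside implies a share price of roughly HKD 805 at the time of the report.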
**ARR Jumps 50% in Two Months: Commercialization Enters Fast Lane**
MiniMax's commercial performance has impressed the market. The Morgan Stanley report indicates the company's ARR climbed rapidly from $100 million in December 2025 to $150 million by February 2026, a 50% increase in the span of two months. This acceleration is attributed to a simultaneous boom across multiple business lines within its open platform:
* Coding Plan contributes a significant proportion of the open platform's revenue, has already achieved a positive gross margin, and has deepened MiniMax's relationship with global developers, promoting widespread adoption of its models in multimodal scenarios.
* OpenClaw, Coding Plan, and other cloud APIs were the core drivers of strong token-usage growth from January to February, with these applications heavily utilizing the M2.1 and newer M2.5 models.
* Significant potential remains for increasing Average Revenue Per User (ARPU) for AI-native products, with several levers yet to be activated. The company's current priority is expanding its user base to build a foundation for future monetization.
Notably, the company has paused updates for its Hailuo video generation product beyond version v2.3, reallocating all resources to the research and development of its next-generation architecture, which focuses on end-to-end output, multimodal input, and long-form video generation. The v3 series is expected to significantly enhance its market competitiveness.
**M2 Token Usage Grows Sixfold: Scale Effects Drive Sharp Cost Reduction**
The explosive growth in token usage not only confirms robust demand but has also directly driven cost optimization on the supply side. The report disclosed that M2 model token usage in February 2026 grew sixfold compared with December 2025, while the inference cost per token decreased substantially. Morgan Stanley attributed this cost reduction primarily to two drivers:
* Significant optimization headroom remains: theoretically, the inference cost per token for M2 could still be halved. Current efficiency gains stem from algorithm optimization rather than reductions in hardware costs.
* Scale effects are materializing: as token volume increases, improvements in computational load balancing and utilization rates are further reducing marginal costs.
On the computing power procurement side, MiniMax's large and rapidly growing demand volume gives it strong bargaining power with suppliers, ensuring favorable pricing. Furthermore, its diversified revenue structure helps hedge against price war risks in specific scenarios. The report notes that management is explicitly optimistic about long-term gross margin improvement, believing that enhanced pricing power from model capability upgrades, continuous optimization, and scale expansion will collectively support a sustained upward trend in gross margins.
**M3 Model: A Generational Leap from "Cost-First" to "World-Class"**
MiniMax's assessment of its long-term potential remains firmly anchored to the intelligence level of its models. The report indicates that the upcoming M3 series is targeted at achieving genuine top-tier global capabilities. This contrasts sharply with the positioning of the M2 model, which was optimized for cost-effectiveness and speed under the resource constraints of its time. The M3 represents a leapfrog improvement in model capability, achieved after the company accumulated sufficient talent, computing resources, and data, rather than a gradual catch-up effort.
Management emphasizes that the design of each model generation—including functional positioning, target market scale, and profit margin structure—is planned in advance. The M3 has been explicitly designed to deliver higher gross margins than existing models. According to the report, MiniMax considers inference speed a key differentiating competitive factor and will continue seeking the optimal balance between cost and speed. Additionally, optimization achievements in its Large Language Model can also enhance the efficiency of its multimodal models, creating technological synergies.
**Platform Value Redefined: Developer Ecosystem as Core Moat**
MiniMax's definition of a "platform company" differs fundamentally from that of traditional internet platforms. Morgan Stanley states the company has clearly expressed that its platform value does not lie in controlling internet traffic or acting as a consumer gateway, but in pushing the boundaries of intelligence. In MiniMax's framework, genuine platform value emerges when new intelligent capabilities give rise to entirely new product categories. The report points out that the company's coding products and Agent pipeline are currently demonstrating strong growth momentum, validating this logic.
In building user stickiness, MiniMax emphasizes that rapid and continuous model iteration is foundational, as developers tend to choose platforms with fast and reliable model advancement. Morgan Stanley believes this means MiniMax's competitive moat is essentially a race centered on the "speed of model evolution"—the entity that can consistently maintain a technical lead will be able to lock in the developer ecosystem.