Did everyone forget about DeepSeek? What Wall Street is getting wrong about Chinese AI.

Dow Jones 01-24 21:30

By Christine Ji

Shares of U.S. hyperscalers seem to have put DeepSeek in the rearview mirror. But if you look closely, a different story emerges.

As U.S. hyperscalers prepare to spend over $600 billion on AI infrastructure this year, Chinese firms are hoping to match their results at a fraction of the cost.

One year ago, a relatively unknown artificial-intelligence lab owned by a Chinese quantitative hedge fund delivered a seismic shock to U.S. tech investors.

The emergence of DeepSeek's R1 reasoning model, which delivered performance comparable to elite U.S. large language models at a fraction of the cost, erased over $750 billion from the S&P 500 index SPX on Jan. 27, 2025. Nvidia (NVDA) alone lost over $590 billion in market value, setting a record for the largest single-day stock wipeout in U.S. history, according to Dow Jones Market Data.

DeepSeek R1's success challenged the idea that AI development would require ever-growing demand for hardware and outsize profits for Nvidia - shaking the very foundation that the U.S. bull market had been built upon. And unlike most leading U.S. models, R1 is open-weight, meaning that its parameters are publicly available for anyone to download to their own hard drive.
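In practical terms, "open-weight" means the trained parameters can be pulled straight from a public repository and run on hardware the user controls. Below is a minimal sketch of that workflow, assuming the huggingface_hub Python library and the publicly listed deepseek-ai/DeepSeek-R1 repository; the pattern filter is there only to keep this illustration from pulling down the full multi-hundred-gigabyte checkpoint.

```python
# Minimal sketch of fetching an open-weight model's files from a public repository.
# Assumes the huggingface_hub package is installed; the repo id and pattern filter
# are illustrative - a real download of the full checkpoint needs ample disk space.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-R1",   # public open-weight repository
    allow_patterns=["*.json", "*.md"],   # fetch only config/docs to keep the example light
)
print("Downloaded model metadata to:", local_dir)
```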

"The market was really wondering if U.S. companies got it wrong: Can you do this more efficiently, and therefore did you overspend?" Rene Reyna, Invesco's head of thematic and specialty ETF strategy, told MarketWatch.

Jeffrey Emanuel, a tech entrepreneur and former quantitative investor, wrote in a viral blog post, "The Short Case for Nvidia Stock," that DeepSeek's efficiency "suggests the entire industry has been massively overprovisioning compute resources." He warned that Nvidia's profit margins could crumble as competitors develop their own cost-efficient chips.

Read: The blogger who helped spark Nvidia's $600 billion stock collapse and a panic in Silicon Valley

But the impact receded from headlines almost as quickly as it had appeared, with the S&P 500 bouncing back to pre-DeepSeek levels by mid-February. The U.S. AI ecosystem had achieved its global dominance by deploying chips and compute at massive scale, and with a clear lead in hardware, it had little incentive to pursue China's efficiency-first strategy. There was no paradigm shift.

Hyperscalers like Google (GOOGL) (GOOG), Meta Platforms (META), Amazon.com (AMZN) and Microsoft (MSFT) have doubled down and invested hundreds of billions of dollars since. Lingering fears of DeepSeek evaporated when investors realized the capital infusion was fueling the AI trade by further spiking demand for semiconductors, utilities, memory and cooling solutions, Reyna said.

In the year following the DeepSeek surprise, increased compute has indeed yielded more capable models - consistent with the field's scaling laws, the empirical finding that model performance improves predictably as compute, data and parameter counts grow - and those more capable models have in turn driven demand for even more processing power. The rise of more powerful AI coding agents has played a key role in this development, Emanuel told MarketWatch over email. "That's what's lifting everything higher, even though many of the market forces I discussed in my article are in fact happening," he said.
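For readers unfamiliar with the term, the sketch below shows the rough shape of a scaling law. The power-law form is standard in the research literature, but the constants here are invented for illustration rather than taken from any published fit.

```python
# Illustrative sketch of a neural scaling law of the form
# L(N, D) = E + A / N**alpha + B / D**beta, where N is parameter count and
# D is training tokens. The constants below are made up for illustration,
# not fitted values from any published paper.
E, A, B, alpha, beta = 1.7, 400.0, 410.0, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss under the illustrative power-law fit."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Bigger models trained on more data keep getting better, but with diminishing returns:
for scale in (1e9, 1e10, 1e11):  # 1B, 10B, 100B parameters, each with 20x tokens
    print(f"{scale:.0e} params -> loss {predicted_loss(scale, 20 * scale):.3f}")
```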

In 2026, the pressure is on for both American and Chinese companies to prove that their AI investments can translate into tangible economic value.

"A lot of the future outcomes financially hinge on a big bet many U.S. firms have made - that enormous scale will eventually result in returns that justify the investment," Graham Webster, a Stanford University research scholar specializing in Chinese technology policy, told MarketWatch. "If it turns out that enormous scale is not the key to success, then you may have a situation where Chinese models are actually more advantageous to use."

Read: Big Tech needs a staggering $1.5 trillion to fund the AI boom. This is the complex playbook it's using to get it.

'The real change has been on the Chinese side'

Where do China's AI capabilities stand today?

Anthropic CEO Dario Amodei offered a skeptical take earlier this week at the World Economic Forum in Davos, Switzerland - telling Bloomberg that DeepSeek's models are optimized to score well on technical benchmarks rather than for real-world performance.

Last year's DeepSeek disruption was a "massive overreaction," according to Google DeepMind CEO Demis Hassabis. He said that China has proven it can catch up to the innovations and leaps made in the U.S., but it still struggles to innovate beyond what American firms have accomplished.

Outside Silicon Valley's purview, however, a different story emerges.

While DeepSeek was disregarded by the U.S., "the real change has been on the Chinese side," Kyle Chan, a fellow at the Brookings Institution, told MarketWatch. R1's release led to a "complete transformation of not just China's AI industry but also China's entire tech sector," according to Chan.

DeepSeek's success accelerated the pace of development across other Chinese AI labs and cemented open-weight as the de facto standard for China's foundational models. While the "Magnificent Seven" continued to dominate the Nasdaq COMP, DeepSeek ignited a revitalization of China's capital markets. An elite group of Chinese AI startups, dubbed the "Six Tigers," has received a flood of capital from state-backed funds and tech giants like Alibaba (BABA) and Tencent (HK:700) (TCEHY). Two of the companies, Zhipu AI (HK:2513) and MiniMax (HK:100), made their public debuts on the Hong Kong Stock Exchange earlier this month.

Reyna, whose oversight includes the Invesco China Technology ETF CQQQ, told MarketWatch that "we saw huge demand." The fund, which passively tracks the FTSE China Technology Index, received over $2 billion in inflows and rose 35% in 2025.

Chinese AI firms have made inroads globally as well, largely thanks to the accessible nature of open-weight models, Chan noted. According to a December 2025 report from Stanford University's Institute for Human-Centered AI, Alibaba's Qwen model overtook Meta's Llama to become the most downloaded LLM family on the open-source AI platform Hugging Face last September. From August 2024 to August 2025, Chinese open-model developers accounted for 17.1% of Hugging Face downloads, while U.S. developers made up 15.8%.

"U.S. firms could use Chinese models on U.S. infrastructure," Stanford's Webster said. "If there is efficiency engineering built into the training and inference ... that doesn't necessarily only benefit the Chinese companies that have produced the model."

And U.S. startups have certainly taken advantage of Chinese open-weight models. Thinking Machines, the AI startup led by former OpenAI CTO Mira Murati, is a prime example: It has utilized Qwen in its core research and integrated the models into its Tinker platform, which allows developers to fine-tune open-source systems for specific enterprise needs.
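Tinker's own interface isn't described here, but the general fine-tuning pattern it enables can be sketched with widely used open-source tooling. The snippet below is an assumption-laden illustration using the transformers and peft libraries and a small Qwen checkpoint; it shows how low-rank adapters let a developer adapt an open-weight model while training only a tiny fraction of its parameters.

```python
# Minimal LoRA fine-tuning sketch using the open-source transformers and peft
# libraries. This is NOT Thinking Machines' Tinker API; it only illustrates the
# general pattern of adapting an open-weight model such as Qwen to a narrow task.
# The model id and hyperparameters are assumptions for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "Qwen/Qwen2.5-0.5B"  # small open-weight checkpoint, assumed for the example
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Attach low-rank adapters so only a small fraction of the weights is trained.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

In a real workflow, the adapter-equipped model would then be trained on the company's own data before deployment, leaving the original open weights untouched.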

Chinese firms spend maybe 20% as much as U.S. hyperscalers

In the U.S., chips and computing power have remained the defining metric of the AI race.

Hyperscalers are projected to spend over $600 billion on capital expenditures in 2026, a 36% increase year over year. This staggering infrastructure buildout has intensified a debate over whether China should have access to the U.S. chip market - with Anthropic's Amodei recently comparing such sales to "selling nuclear weapons to North Korea."

However, Chan believes the emphasis on chips will lessen in the future. Beijing is accelerating its domestic chip push to insulate its tech sector from U.S. export controls and hardware dependencies, he pointed out. Chinese officials have reportedly instructed companies to limit purchases of U.S. chips unless absolutely necessary.

Additionally, scaling laws, once considered the industry's central dogma, have come under increasing scrutiny as returns diminish. "From 2020 to 2025, it was the age of scaling," Ilya Sutskever, OpenAI's former chief scientist who now leads the AI startup Safe Superintelligence, said on a November episode of the Dwarkesh Podcast. "But now the scale is so big. ... Is the belief that if you just 100x the scale, everything would be transformed? I don't think that's true - so it's back to the age of research again."

See more: Cybersecurity stocks fall, but an analyst wonders if China fears are just 'fake news'

Even with a flood of fresh investment over the last year, Chinese tech firms have kept their ethos of efficiency: Goldman Sachs estimates that top Chinese firms will spend just 15% to 20% of the budget of their U.S. counterparts.

Chinese firms are operating under a greater sense of resource constraint, Webster highlighted. On pure computing power, the U.S. holds an indisputable edge, as Nvidia's latest Blackwell chips are estimated to be five times as powerful as Huawei's flagship Ascend chips. Export controls on advanced lithography equipment have prevented Chinese manufacturers from fitting more transistors on their chips, resulting in more energy-intensive systems, according to Jack Gold, founder and principal analyst at J.Gold Associates.

"A massive amount of compute at OpenAI and other American companies is dedicated to next-generation research, whereas we are stretched thin," Justin Lin, who leads development of Alibaba's Qwen, said at an AI conference in Beijing earlier this month. "Just meeting delivery demands consumes most of our resources."

Hardware constraints and state-led economic policy have led Chinese firms to prioritize different goals than their U.S. counterparts, Webster noted. Beijing's "AI Plus" blueprint mandates a nationwide integration of AI to catalyze "new quality productive forces," targeting a 70% penetration rate across priority sectors by 2027.

"It's a planning perspective that's looking for concrete economic enhancements at the firm and sector level, as opposed to just developing a machine that might transcend humanity," Webster said.

(MORE TO FOLLOW) Dow Jones Newswires

January 24, 2026 08:30 ET (13:30 GMT)

Copyright (c) 2026 Dow Jones & Company, Inc.
