AMZN showed why a lot of AI hype in the stock market may not be justified

One of the best ways for a company to build a strong moat around its business is to commoditize its complements. In other words, increase your power in the market by making sure every supplier, every competitor, and every customer is a commodity.

A great business is the most valuable point in a supply chain and there’s rarely more than one point of high value in an industry. If one supplier or customer becomes too large or important, it diminishes your power in the market, and the point of value shifts to that power player.

We are seeing these strategic battles play out in artificial intelligence, where NVIDIA (NVDA), OpenAI, Microsoft (MSFT), Meta Platforms (META), Alphabet (GOOG, GOOGL), and Amazon (AMZN) fight to be the point of value and integration in AI. They each need to commoditize their complements.

The question is, who will succeed?

This week, Amazon announced the wide availability of custom AWS chips (Trainium2 and Inferentia2) and new foundation models (Nova) for artificial intelligence. The idea for AWS is to offer thousands of customers a wide variety of chips and models, including its own as a backstop against supplier power and price gouging, commoditizing both models and chips.

Once seen as a laggard in AI, Amazon may play a big role in commoditizing some of the hottest companies in AI. This could lead to lower margins and growth rates for complements, which I think will disappoint investors.

When Hyperscalers Push Back

The story of the past two years has been the rapid growth of AI and NVIDIA’s dominance of the AI data center market. When ChatGPT hit the scene in November 2022, NVIDIA was the one company prepared to take advantage, and it has done so with not only rapid revenue growth but also a massive flex of pricing power.

NVIDIA can flex that power because it was essentially the only company optimizing for AI before 2022. But NVIDIA’s main customers (Meta, Google, Amazon, and Microsoft) don’t want to be beholden to NVIDIA in AI, and they see the company’s roughly 75% gross margins as a sign they’re losing power in the market.

Meta, Google, Amazon, and Microsoft all have the same incentive. They want to be the point of value, not a complement.

So, what do they do?

1. Commoditize AI Models

OpenAI is the best-known AI model company today, but it doesn’t have any real lead over the competition.

Amazon has invested $8 billion in Anthropic, which is arguably a better model maker than OpenAI.

Google’s Gemini models have similar capabilities to competitors and are improving rapidly.

Meta has open-sourced Llama, which is seen as an industry leader.

Mistral, xAI, and others are producing LLMs comparable to OpenAI’s.

If no LLM, multimodal model, or whatever comes next has a clear advantage over the others, they’re all essentially commodities. A customer may choose one over another for various reasons, but the hyperscalers will offer nearly all of them (Gemini will likely stay exclusive to Google Cloud), so there’s no pricing or technology power in the models themselves.
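To make that concrete, here is a minimal, hypothetical sketch (in Python) of what a commodity looks like from an application developer’s seat: the model is just a configuration value, and switching providers is a one-line change. The ModelConfig class, the send_prompt() helper, and the model names and prices below are illustrative placeholders, not any real SDK or real pricing.

```python
# Hypothetical sketch: when models are interchangeable, the "choice" of model
# collapses into configuration. Nothing below is a real SDK; names and prices
# are placeholders for illustration only.

from dataclasses import dataclass


@dataclass
class ModelConfig:
    provider: str                # e.g. "anthropic", "meta-llama", "amazon-nova"
    model_id: str                # the specific model the application calls
    price_per_1k_tokens: float   # placeholder number, not real pricing


def send_prompt(cfg: ModelConfig, prompt: str) -> str:
    # A real application would call the provider's API here; the point is that
    # this calling code does not change when the model behind it does.
    return f"[{cfg.provider}:{cfg.model_id}] response to: {prompt}"


# Swapping the "best" model for a cheaper one is a config change, not a
# rewrite; that is what commoditization looks like in practice.
current = ModelConfig("anthropic", "claude-latest", 0.008)
cheaper = ModelConfig("amazon-nova", "nova-lite", 0.002)

for cfg in (current, cheaper):
    print(send_prompt(cfg, "Summarize this support ticket."))
```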

AI models are a commodity!

2. Commoditize AI Chips

What isn’t a commodity is chips. NVIDIA is the dominant chip company today. That’s because hyperscalers were caught off guard by AI.

But every hyperscaler has the incentive to change that.

AWS has introduced Trainium2 (Trainium3 is coming next year) and Inferentia2.

Google has its custom TPUs (tensor processing units) handling its own training and inference.

Meta has developed the Meta Training and Inference Accelerator (MTIA) chips for its internal workloads.

Microsoft has developed the Maia 100 and Cobalt 100 chips for its AI workloads.

And of course, AMD, Intel, Qualcomm, Apple, and many others are developing their own chips.

No one in tech wants to be beholden to NVIDIA, and they’re running as fast as they can to make sure they aren’t. We aren’t “there” yet, but in a year or two I think we’ll see NVIDIA’s growth slow significantly and its margins normalize, which we may already be seeing. (Note: this plays out to the benefit of TSMC, which fabricates these chips regardless of who designs them.)

3. Abstract Away NVIDIA’s Point of Leverage

The final piece is CUDA. I’m not an AI developer (so forgive any slightly incorrect details below), but CUDA is the software layer and set of toolkits NVIDIA gives developers so they can program its chips.

CUDA is to NVIDIA chips what iOS is to the iPhone. Sure, the iPhone is probably better hardware than its competitors, but that advantage is marginal. It’s the operating system that ties it all together and drives value.

How do you abstract away CUDA and NVIDIA?

If you’re a hyperscaler, you build software that abstracts away CUDA (or replaces it) so developers don’t have to learn it. They just use your systems. What happens underneath doesn’t matter.
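To make the abstraction concrete, here is a minimal sketch in Python using PyTorch, one of the layers that already sits above the vendor toolkit (AWS’s Neuron SDK and Google’s PyTorch/XLA integrations slot in underneath in a similar way). The model code never mentions CUDA directly; it only asks for whatever accelerator happens to be available.

```python
# Minimal sketch of hardware abstraction with PyTorch: the code asks for an
# accelerator if one is available and otherwise falls back to the CPU. The
# model definition and forward pass are identical either way.

import torch
import torch.nn as nn

# Pick whatever device is available; nothing else in the script changes.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
).to(device)

batch = torch.randn(32, 128, device=device)

with torch.no_grad():
    output = model(batch)

print(f"Forward pass ran on {device}; output shape: {tuple(output.shape)}")
```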

This has been happening in software for decades. In the early 2000s, I learned to write HTML to make my own websites. Now, I am writing this newsletter, which gets published as a website, and I don’t write a single line of code.

This abstraction is an attempt to commoditize NVIDIA’s software advantage and EVERYONE is doing it.

Amazon’s Pitch to Commoditize AI Complements

I’m using Amazon as a stand-in here, but its strategy is similar to every other hyperscaler.

Amazon’s Pitch to Developers: We are happy to supply you with access to an NVIDIA cluster or any other chip you want, but our custom silicon will set a price floor and the building blocks of AWS will allow you to access all of your data on AWS and use any model you choose.

Amazon has customers at scale, and it offers those customers lots of different building blocks, which then become commodities. Some of those building blocks are vertically integrated, but Amazon doesn’t really care whether you use Trainium chips as long as you’re on AWS. Trainium, Inferentia, and Nova are all about creating leverage over suppliers.

Maybe one customer chooses the Llama model, another chooses Nova, and another likes Anthropic. Some customers choose NVIDIA chips, but others want lower costs and choose AWS Inferentia, and still others want an AMD chip.
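As a rough sketch of what that pitch looks like in code, the snippet below assumes AWS’s Bedrock Converse API, which gives different providers’ models the same request and response shape. The model IDs are illustrative (check the Bedrock model catalog for the exact identifiers available in your region), and standard AWS credentials are assumed.

```python
# Sketch: on Bedrock, the application calls one API and the model behind it is
# just a string. Model IDs below are illustrative; verify them against the
# Bedrock model catalog for your region before running.

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def ask(model_id: str, question: str) -> str:
    # The Converse API normalizes the request/response shape across providers,
    # which is exactly what makes the models interchangeable for the caller.
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": question}]}],
    )
    return response["output"]["message"]["content"][0]["text"]


question = "Summarize our return policy in two sentences."

# One customer picks Anthropic, another picks Llama, another picks Nova; the
# surrounding application code is identical.
for model_id in (
    "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative ID
    "meta.llama3-8b-instruct-v1:0",            # illustrative ID
    "amazon.nova-lite-v1:0",                   # illustrative ID
):
    print(model_id, "->", ask(model_id, question))
```

The chip question works the same way one level down: whether a request is served from NVIDIA GPUs, Trainium, or Inferentia is invisible at this layer.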

Google is following a similar path. It has NVIDIA chips, but it’s also been building TPUs for over a decade and they’re a viable alternative to NVIDIA.

And all of this is possible because…

The End User Doesn’t Care

When you pull your iPhone out of your pocket to send a text message, you’re telling the world something about your choice in consumer electronics. Physical devices matter because we’re telling the world about our taste, affluence, or what “team” we’re on.

When an application like an AI call center agent, grammar tool, or chatbot provides you with an AI-generated answer, there’s no taste or affluence involved. As a user, you just want the answer or experience you’re looking for, and the developer just wants to get the job done, balancing model capability against speed, cost, and efficiency. The core model or chip simply doesn’t matter to the end user.

And this will be the battle to watch over the next decade in tech.

Where is the point of most value?

Today, it’s NVIDIA, whose chips are the common layer across hyperscalers, model developers, and applications. But will that status quo hold?

I think we’re seeing that models are now commodities.

Hyperscalers are trying to make chips more commodity-like and they have tens of billions of dollars at stake to make it happen.

At the end of the day, AI innovation may lead us back to the same place we were two years ago: the hyperscalers win again, commoditizing all of their complements.

Amazon made a compelling case that this is how the future plays out.
