Summary
Modern AI technology relies on neural networks with enormous numbers of parameters, a technology that is extremely expensive to develop.
This has resulted in there being very few credible AI players actually in the market - far fewer than it may seem.
These companies are all known entities and are some of the biggest technology companies around: Amazon, Microsoft, Google, and Meta.
Across the pack, Microsoft is handily in the lead. Yet the AI moat is so significant that I wouldn't underestimate any of them over the next decade.
This article details the nature of modern AI technology, the general economics of building it, and the AI capabilities of these scale players.
Overview of AI Technology
There are far fewer credible AI stocks in the market today than you may have been led to believe. While a lot of companies want to claim an artificial intelligence capability, what they really have in place is some kind of machine learning system. Machine learning is technically a type of artificial intelligence. In essence, a machine learning system is a self-improving algorithm that leverages advanced statistics to get better over time. The most advanced ML systems make use of the same foundational data structure as cutting-edge artificial intelligence systems: neural networks.
Neural networks take the form of a graph. This isn't your standard x/y/z coordinate graph. Rather, this kind of graph is a set of nodes with quantified relationships (weighted edges) between them. These graphs suit computational learning systems in particular because they are loosely inspired by the structure of the human brain. Graphs have been around for a long time and are utilized in a variety of applications; they are also the foundational mathematical structure for neural networks, and thus for modern artificial intelligence.
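To make the idea concrete, here is a minimal sketch in Python - with made-up node names and weights - of a graph as a set of nodes with quantified relationships between them, the structure a neural network is built on:

```python
# A graph as nodes with quantified (weighted) relationships between them.
# In a neural network, the nodes are "neurons" and the weights on the
# edges are the parameters the system learns during training.
graph = {
    "A": {"B": 0.8, "C": -0.3},   # node A connects to B and C
    "B": {"C": 1.5},              # node B connects to C
    "C": {},                      # node C has no outgoing edges
}

def node_input(graph, target, activations):
    """Sum the weighted signals flowing into `target` from other nodes."""
    total = 0.0
    for source, edges in graph.items():
        if target in edges:
            total += activations[source] * edges[target]
    return total

# Signal flowing into C given activations at A and B: -0.3 + 1.5 = 1.2
print(node_input(graph, "C", {"A": 1.0, "B": 1.0, "C": 0.0}))
```

A real network stacks many such nodes into layers and adjusts the edge weights automatically, but the underlying structure is exactly this: nodes plus quantified connections.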
If advanced machine learning models make use of neural networks, then aren't they AI? Technically, yes - all machine learning systems are AI. The difference is really a matter of how many parameters there are in the model. Systems like ChatGPT have massive numbers of parameters. A parameter is a numerical value - a weight - that the model learns during training; the more parameters, the more patterns in its data the model can capture. The largest models are so complex that the interplay of these parameters has no clear analogue in human experience. This is what results in them being 'black box' models that cannot be fully understood by people.
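As a rough illustration of how parameter counts are tallied, here is a sketch - using hypothetical layer sizes and simple fully connected layers only - of counting the learned weights and biases in a small network:

```python
def count_parameters(layer_sizes):
    """Count weights and biases in a fully connected network.

    Each pair of adjacent layers contributes (inputs * outputs) weights,
    plus one bias value per output neuron.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A toy network: 784 inputs -> 128 hidden neurons -> 10 outputs
print(count_parameters([784, 128, 10]))   # 101770 parameters

# Widening the hidden layer roughly doubles the parameter count
print(count_parameters([784, 256, 10]))   # 203530 parameters
```

Even this toy network has over a hundred thousand parameters; the billion- and trillion-parameter counts quoted for large language models come from scaling this same bookkeeping up by many orders of magnitude.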
It is these massive neural networks that are responsible for the impressive advances that we have been seeing in the field. More specifically, the neural networks underlying the most well-known AI systems (such as ChatGPT) are considered 'large language models'. They are referred to as such because they combine massive neural networks with algorithms from the field of natural language processing (NLP). This combination allows them to process, generate, and even seemingly comprehend text.
A review of the models built at OpenAI presents some clarity as to just how large these large language models are.
| Large Language Model | Number of Parameters |
| --- | --- |
| GPT-1 | 117 Million |
| GPT-2 | 1.5 Billion |
| GPT-3 | 175 Billion |
| GPT-4 | Undisclosed; unverified estimates run as high as 100 Trillion |
Source: OpenAI, Author's Excel Spreadsheet
As OpenAI built larger and larger models, they discovered that each system became better purely on the basis of having more parameters. GPT-3 represented an inflection point, demonstrating a level of competence that people had not seen from a computer previously and kicking off the current craze.
GPT-4 is quite new and OpenAI has not disclosed how many parameters it has, although anecdotal estimates from people in the technology sector run as high as 100 trillion (100,000 billion) - a figure that should be treated with skepticism. Since OpenAI had consistently been transparent with this information before, the silence most likely reflects competitive pressure rather than a lack of knowledge.
Economics of Building AI Systems
The reality of these models is that size matters. In fact, it has become the preeminent factor determining the quality of an AI system. What we can conclude from this is that artificial intelligence technology is extraordinarily expensive to build. OpenAI has raised roughly $11B in funding just to build its first four models - and it is a company that focuses single-mindedly on building AI systems.
Amazon's CEO spoke to the cost of building these systems in a recent interview with CNBC.
As such, the emerging reality of this extremely expensive technology is that only a few companies are going to have it. Just as Amazon is doing with Bedrock, these companies will then license their models out to others. It should then be clear who will capture most of the economics of AI: the companies that own the foundational models. There are only five firms I can think of offhand that have actually built AI systems approximately as sophisticated as those being released by OpenAI: Amazon (AMZN), Microsoft (MSFT), Google (GOOGL), Meta (META), and IBM (IBM). The next section of this article will review the first four of these as well as the progress they have made to date.
Rising Titans of Artificial Intelligence
As I mentioned, companies need to be of a certain size to have the resources to actually build these models. Furthermore, they need to have already put down on the order of $10B in capital expenditure. The computing resources required to build these models are vast, and artificial intelligence developers are scarce, in demand, and very expensive. It's altogether reasonable that the list of credible players is quite small. I'll outline them here, along with some notes about the AI work they have done thus far.
1. Amazon
Since I already mentioned Amazon, it's worth starting with them. Amazon, through AWS, is already a serious and assertive artificial intelligence player. Through their Bedrock service they are already selling foundational large language models for other businesses to use and customize as they see fit.
They are also distributing models built by startups, including the high-profile OpenAI competitor Anthropic. I expect them to do quite well through this initiative.
AWS has also been selling simpler AI services for some time. However, this type of narrow AI-enabled cloud service, while pioneered by AWS, has long been available through Microsoft's Azure as well as Google Cloud Services. As such it can't really be considered the leading edge any longer.
2. Microsoft
Next up is Microsoft. Microsoft is well-regarded when it comes to AI and is certainly seen as being on its front foot in this rapidly developing market. Just like AWS, Microsoft is selling access to foundational LLM technology via its Azure cloud services division. You can already leverage GPT-4 via Azure.
In addition to this, Microsoft is in the early stages of deploying AI-enabled services across its entire suite of Office products. Given the sheer ubiquity of Microsoft Office, this is a very significant development. While this project is still in the pilot stage and is only being offered to a select group of enterprise clients, it will completely rewire the way business gets done when it is released en masse.
By combining an intuitive interface with ready integration into products that most businesses already have, Microsoft is going to change the game. I recommend watching the video in the link to see exactly what I'm talking about here - I can certainly say that I am impressed. I think this is the most significant artificial intelligence offering that will be brought to market over the next year.
Finally, we should also note that Microsoft is the first of these players to sell a useful AI product directly to consumers. GitHub Copilot entered technical preview back in 2021 and became generally available in mid-2022, and GitHub as a whole crossed $1B in annual recurring revenue in 2022.
As things now stand, Microsoft has both the broadest AI footprint as well as the most immediately identifiable revenue derived from selling pure AI products. It is the market leader.
3. Alphabet (Google)
Next up is Google. Google has long been a serious player in AI research, having bought leading AI research house DeepMind back in 2014. DeepMind has since achieved a number of industry firsts, including proving out systems able to beat top human players at Go, a game significantly more complex than chess. DeepMind has also contributed to research across a variety of fields, including protein folding and cancer research. It has published an extensive body of research and, anecdotally, is widely regarded as one of the best - if not the best - artificial intelligence laboratories in existence today.
On the commercialization side, Google is perceived to be lagging. Upon review, however, this doesn't necessarily seem to be the case. Google does sell AI services and cloud AI data infrastructure to developers via Google Cloud, just like AWS and Microsoft's Azure. It has also stepped into the market and is offering access to its proprietary LLMs via its cloud services. Furthermore, it is testing a generative AI toolkit across its Workspace products, which will provide similar capabilities to Microsoft's Office Copilot initiative.
In terms of footprint, Google has its bases covered. However, it is fair to say that it does not have the same level of immediate commercial potential in AI as its closest competitors. The simple fact that its products are highly similar to those of its peers, yet far less well known, is proof of this. Additionally, Google Cloud was not yet profitable as of 2022, and expensive AI initiatives could act as a further drag on it - not exactly something Amazon needs to worry about at AWS.
Nonetheless, I would not underestimate Google over the long term when it comes to AI. This game will take time to play out, and quality matters. I am confident that Google has best-of-breed technology. While its notoriously research-oriented culture means it takes more time to bring things to market, it will undoubtedly be one of the titans of AI going forward - in fact, it already is. The company simply needs to prove out its ability to monetize this technology, which its peers, especially Microsoft, are doing better at the moment.
4. Meta (META)
Meta credibly has models in the same class as its peers but has yet to commercialize them. Having been involved in AI for some time, it has open-sourced a multitude of advanced AI-enabling software technologies. PyTorch is a notable example and is a market standard for rapid deployment of small-to-medium-sized AI models.
Similar to Google, Meta has the technology in place but hasn't brought it to market yet. I think the opportunity for Meta will be in generative AI deployed across its apps. This will likely take the form of allowing users to readily customize and create content using AI. How this will play out exactly is as yet unclear, but I can assert that the company has the technology in place to make it happen.
Conclusion
If I had to rank these 4 companies in terms of where they're at as to commercializing AI, I would say:

1. Microsoft
2. Amazon
3. Google
4. Meta
Microsoft is leading the pack at the moment and should see serious growth as a result of the Microsoft Office initiative it has in the works. This could start adding billions to its bottom line as soon as this year.
Overall, I want to be clear in saying that these companies have already established moats that few other companies will have. Over the next decade I would not underestimate any of them. From doing the research for this analysis, it has become clear to me that these small-cap 'AI stocks' are anything but. They simply don't have technology that comes anywhere close to what these scale players have in place.
To that end, I would say that these companies are in the best position to benefit from AI over the next decade; they are positioned to capture the lion's share of the economics. I reiterate that I would not underestimate any of them. If I had to pick one for the year ahead, it would be Microsoft, which I rate a buy.
Source: Seeking Alpha