Nvidia (NVDA) To Benefit From Advantage In Generative AI Value Chain
Thanks to generative AI, an entire ecosystem has emerged, from hardware providers to application builders, and it will help businesses move from experimentation to seeing the fruits of their investments.
Generative AI Value Chain
As we have seen over the last two years, the development and deployment of AI systems, in particular generative AI systems, has become increasingly popular. According to a McKinsey article, a new value chain is emerging to support the training and use of this powerful technology.
If you are familiar with the traditional AI value chain, you will find this one quite similar. The six top-level categories consist of computer hardware, cloud platforms, foundation models, model hubs and machine learning operations (MLOps), applications, and services; only foundation models are a new addition.
Looking at the chart below, I agree that there are opportunities across the generative AI value chain, but the most significant lies in building end-user applications.
Why Fine-Tuned Models Are Used to Build Better Applications
As a developer myself, I think that leveraging fine-tuned foundation models (for example, those that have been fed additional relevant data or had their parameters adjusted) could help speed up the time to deliver outputs for a particular use case.
While training foundation models requires massive amounts of data, is extremely expensive, and can take months, fine-tuning foundation models requires less data, costs less, and can be completed in days, putting it within reach of many companies. Application builders may amass this data from in-depth knowledge of an industry or customer needs.
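The pretrain-then-fine-tune idea above can be shown with a toy sketch in plain Python. This is not how real LLM fine-tuning is implemented (that would use a deep-learning framework such as PyTorch, often with parameter-efficient methods like LoRA); it only illustrates the core pattern: start from weights learned on a large generic dataset, then adapt them with a handful of domain examples.

```python
# Toy illustration of fine-tuning: a linear model "pretrained" on a large
# generic dataset is adapted with a few domain-specific examples.
# All names and numbers here are illustrative, not from any real system.

def train(w, b, data, lr=0.5, epochs=500):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2 * err * x / len(data)
            gb += 2 * err / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# "Pretraining" on a large generic dataset lying on y = 2x
generic = [(i / 100, 2 * i / 100) for i in range(100)]
w, b = train(0.0, 0.0, generic)

# "Fine-tuning" on just three domain examples lying on y = 2x + 1:
# we keep the pretrained weights and only nudge them with the new data.
domain = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
w_ft, b_ft = train(w, b, domain, lr=0.3)

print(round(w_ft, 2), round(b_ft, 2))  # w_ft is close to 2.0, b_ft close to 1.0
```

The point mirrors the article: the fine-tuning step reuses almost everything learned in pretraining and needs only a small, targeted dataset, which is why it is so much cheaper and faster.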
In this article, I will be sharing the resources that the NVIDIA NGC AI Development Catalog provides, as this might change our view of $NVIDIA Corp(NVDA)$ as only a hardware manufacturer and provider.
Resources From the NVIDIA NGC AI Development Catalog
To use these resources, you will need to sign up for a free NGC developer account, which gives access to the following:
The GPU-optimized NVIDIA containers, models, scripts, and tools used in these examples
The latest NVIDIA upstream contributions to the respective programming frameworks
The latest NVIDIA Deep Learning and LLM software libraries
Release notes for each of the NVIDIA optimized containers
Links to developer documentation
This would be useful because organizations could also leverage proprietary data from daily business operations and, together with what Nvidia provides, customise models for their own business functions.
ChipNeMo is a specially tuned spin on a large language model. It starts as an LLM made up of 43 billion parameters that acquires its skills from one trillion tokens—fundamental language units—of data.
Adapting it took two more steps. First, the already-trained model was trained again on 24 billion tokens of specialized data. Twelve billion of those tokens came from design documents, bug reports, and other English-language internal data accumulated over Nvidia's 30 years of chip-design work.
The other 12 billion tokens came from code, such as the hardware description language Verilog and scripts for automating tasks with industrial electronic design automation (EDA) tools.
Finally, the resulting model underwent "supervised fine-tuning," training on 130,000 sample conversations and designs.
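The ChipNeMo recipe above can be summarized in a small sketch. The token and example counts come from the article; the stage labels and data-structure layout are my own, not an NVIDIA artifact.

```python
# Sketch of the ChipNeMo-style adaptation recipe described above.
# Counts are from the article; the structure is an illustrative assumption.

BILLION = 10**9

pipeline = [
    {"stage": "base pretraining", "tokens": 1000 * BILLION,
     "data": "general text (one trillion tokens)"},
    {"stage": "domain-adaptive pretraining", "tokens": 24 * BILLION,
     "data": "12B internal docs/bug reports + 12B Verilog and EDA scripts"},
    {"stage": "supervised fine-tuning", "examples": 130_000,
     "data": "sample conversations and designs"},
]

# The domain data is a tiny fraction of the base pretraining corpus,
# which is why adaptation is so much cheaper than training from scratch.
ratio = pipeline[1]["tokens"] / pipeline[0]["tokens"]
print(f"domain data is {ratio:.1%} of base pretraining data")  # 2.4%
```

That 2.4% ratio is the whole economic argument: Nvidia got a chip-design specialist model for a small fraction of the original training budget.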
NVIDIA delivers generative AI through new laptops, GPUs and tools
At CES 2024, NVIDIA unveiled an array of hardware and software aimed at unlocking the full potential of generative AI on Windows 11 PCs.
Running generative AI locally on a PC is critical for privacy-, latency- and cost-sensitive applications. At CES, NVIDIA is bringing new innovations across the full technology stack to enable the generative AI era on PC. RTX GPUs are capable of running the broadest range of applications with the highest performance, and the Tensor Cores in these GPUs dramatically speed up AI performance across the most demanding applications for work and play.
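The privacy/latency/cost tradeoff can be sketched as a toy decision helper. The function name, thresholds, and rules below are illustrative assumptions of mine, not an NVIDIA API; they just make the tradeoff in the paragraph above concrete.

```python
# Toy decision helper for the local-vs-cloud inference tradeoff.
# Thresholds and rules are illustrative assumptions, not a real API.

def choose_inference_target(private_data: bool,
                            latency_ms_budget: float,
                            monthly_queries: int) -> str:
    """Prefer local (on-device GPU) when privacy, latency, or
    per-query cloud cost dominates; otherwise use a cloud endpoint."""
    if private_data:
        return "local"   # sensitive data never leaves the device
    if latency_ms_budget < 100:
        return "local"   # avoid network round-trip time
    if monthly_queries > 100_000:
        return "local"   # per-query cloud fees add up at scale
    return "cloud"

print(choose_inference_target(True, 500.0, 10))    # local
print(choose_inference_target(False, 500.0, 10))   # cloud
```

This is the consumer case Nvidia is betting on: once any of those three factors bites, the workload moves onto the user's own RTX-equipped device.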
Basically, this is what the trend looks like for generative AI: a focus on end-user computing.
Summary
Based on what we have seen from CES 2024, Nvidia graphics cards are still popular among developers running workloads locally.
In the latest Top 5 for 2024 (to date), Nvidia occupies three of the five spots. In terms of value for money and popularity, Nvidia scores pretty well.
I would appreciate it if you could share your thoughts in the comment section: do you think Nvidia can ride a comeback in consumer demand as users run generative AI workloads on end-user computing devices for privacy and security reasons?
@TigerStars @Daily_Discussion @Tiger_Earnings @TigerWire I would appreciate it if you could feature this article so that fellow tigers can benefit from my investing and trading thoughts.
Disclaimer: The analysis and results presented do not recommend or suggest investing in the said stock. This is purely for analysis.