SUYADI
2023-12-02
I love the product Nvidia
Can Nvidia Hit $500 Before 2024?
Reposted from Seeking Alpha, 2023-12-02 07:58 (https://seekingalpha.com/article/4655396-can-nvidia-hit-500-before-2024)

- Despite Nvidia Corporation reporting blockbuster fiscal Q3 results and stellar guidance that reinforces the staying power of emerging AI tailwinds, the stock has slipped from its all-time high valuation.
- Nvidia stock's latest pullback is likely an adjustment for lost revenues stemming from recent export rule changes concerning Nvidia's GPU sales to the Chinese market.
- At current levels, Nvidia is likely still priced for perfect execution, with several industry-wide and company-specific risks on the horizon that could overshadow existing momentum in AI opportunities.

Despite another blockbuster quarter and blowout end-of-year guidance, Nvidia Corporation (NASDAQ: NVDA) stock has yet to recapture its all-time high above $500 apiece, reached on the eve of its F3Q24 earnings release. The investment community has largely attributed the paradox to the stock's rich valuation, which demands perfect execution that Nvidia has yet to deliver on.
This echoes the recent impact of Washington's updated rules on exports of advanced semiconductor technologies to China, which Nvidia expects to partially offset the surging demand for its products observed in other regions.

Yet the company's latest results and forward outlook continue to reinforce prospects for a strong demand environment, bolstered by Nvidia's mission-critical role in supporting next-generation growth opportunities. Looking ahead, we view Nvidia's unmatched profit margins and growth opportunities relative to its semiconductor and tech peers in the $1+ trillion market cap range, enhanced by emerging AI demand, as key drivers for sustaining the stock's performance at current levels. However, further near-term gains may rest on the extent and timing with which it can recapture opportunities and share in China, as well as its ability to create incremental TAM-expanding opportunities in non-AI forays through continued innovation.

Multiple Demand Drivers to Sustain Growth

After two consecutive quarters of breakneck growth, particularly in the data center segment amid strong monetization of rising AI opportunities, a chorus of concerns has emerged over the trend's longer-term sustainability. In response, Nvidia CEO Jensen Huang has reaffirmed his confidence in the data center segment's ability to "grow through 2025," citing several drivers.

Accelerated Computing

Specifically, Huang has attributed Nvidia's long-term growth trajectory not only to existing demand from the burst of AI workloads, but also to the broader transition from general-purpose to accelerated computing.
Considering the $1 trillion spent on installing CPU-based data centers over the past four years, Nvidia is well positioned to realize even greater opportunities from the impending upgrade cycle to accelerated computing, spearheaded by the emergence of generative AI as the "primary workload of most of the world's data centers."

This is consistent with the gradual rise in prices of Nvidia's next-generation accelerators, which underscores the potential for a total addressable market (TAM) exceeding $1 trillion from the transition to accelerated computing. Nvidia's best-selling H100 GPU, based on the Hopper architecture, sells at an average price of about $30,000, despite surging toward $50,000 apiece earlier this year in secondary markets due to constrained supplies. The impending H200 chip, the first GPU to offer next-generation HBM3e (the latest high-bandwidth memory, capable of doubling inference speed relative to the H100), is expected to cost as much as $40,000 apiece. This compares with an average of $10,000 apiece for its predecessor, the Ampere-based A100.

Both the DGX A100 and DGX H100 systems consist of eight A100 and eight H100 GPUs, respectively, despite the latter's greater performance and inference speeds. Meanwhile, the latest DGX GH200 system strings together 256 GH200 Superchips capable of 500x more memory than the DGX H100 system, which highlights the TAM-expanding opportunity of the emerging era of accelerated computing.

The era of accelerated computing has also increased demand for next-generation data center GPUs from a wide range of verticals spanning cloud service providers ("CSPs"), enterprise customers, and sovereign cloud infrastructure, highlighting Nvidia's prospects for incremental market share gains.
Primary CSPs, including Amazon Web Services (AMZN), Google Cloud (GOOG, GOOGL), Microsoft Azure (MSFT), and Oracle Cloud (ORCL), plan to deploy H200-based instances when the chip enters general availability next year, offering validation of the technology. Nvidia has also made initial shipments of the GH200 Grace Hopper Superchip to the Los Alamos National Lab, the Swiss National Supercomputing Center, and the UK government to support the country's build of "the world's fastest AI supercomputers called Isambard-AI," underscoring the chipmaker's extensive reach into emerging public-sector opportunities as well.

Competitive TCO

Nvidia's latest innovations have also enabled a competitive TCO (total cost of ownership) advantage, effectively addressing customers' demands for greater cost efficiency while improving the performance needed for increasingly complex workloads.

For instance, Nvidia has been actively stepping up its complementary software capabilities to optimize TCO across its hardware portfolio for customers.
This includes the recent introduction of TensorRT-LLM, which, combined with the H100 GPU, can deliver up to 8x better performance than the A100 GPU alone on small language models, while reducing TCO by as much as 5.3x and energy costs by as much as 5.6x.

(Performance and TCO comparison charts: developer.nvidia.com)

The improvements come from a combination of innovations spanning tensor parallelism, in-flight batching, and quantization:

- Tensor parallelism: This feature enables inference (the process of having a trained model make predictions on live data) at scale. Historically, developers have had to adjust models manually to coordinate parallel execution across multiple GPUs and optimize LLM inference performance. Tensor parallelism removes a large roadblock by allowing large language models to "run in parallel across multiple GPUs connected through NVLink and across multiple servers without developer intervention or model changes," significantly reducing time and cost to deployment.

- In-flight batching: An "optimized scheduling technique" that allows an LLM to execute multiple tasks continuously and simultaneously, optimizing GPU usage and minimizing idle capacity, which in turn improves TCO.

  "With in-flight batching, rather than waiting for the whole batch to finish before moving on to the next set of requests, the TensorRT-LLM runtime immediately evicts finished sequences from the batch."
  "It then begins executing new requests while other requests are still in flight." (Source: NVIDIA Technical Blog)

- Quantization: The process of reducing the memory that the billions of model weight values within an LLM occupy on the GPU. This lowers memory consumption for model inference on the same hardware while enabling faster performance with preserved accuracy. TensorRT-LLM automates the quantization process, converting model weights into lower-precision formats without any modification to the model code.

Taken together, TensorRT-LLM addresses the major optimization requirements of enterprise and CSP deployments and continues to validate the value proposition of the NVIDIA CUDA and hardware ecosystem. Coupled with compatibility with major LLM frameworks, such as Meta Platforms' (META) Llama 2 and OpenAI's GPT-2 and GPT-3, on which the most popular generative AI capabilities are built, the introduction of TensorRT-LLM is expected to further reinforce Nvidia's capture of growth opportunities ahead. By improving TCO, Nvidia also plays a critical role in expanding the reach of generative AI, which in turn reinforces a sustained demand flywheel for its offerings.

Full Stack Advantage

Nvidia has also bolstered its full-stack advantage in recent years through the build-out of its complementary software-hardware ecosystem.
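The weight-quantization idea described in the list above can be illustrated with a minimal sketch. This is generic symmetric per-tensor INT8 quantization, an assumption for illustration only, not TensorRT-LLM's actual implementation:

```python
# Generic symmetric INT8 quantization sketch (illustrative only, not
# TensorRT-LLM's internals): map float weights to 8-bit integers with a
# single per-tensor scale, shrinking memory ~4x versus FP32.
def quantize_int8(weights):
    """Scale by the max magnitude so the largest weight maps to +/-127."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the INT8 codes."""
    return [x * scale for x in q]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_int8(w)          # 8-bit codes plus one float scale
restored = dequantize(q, s)      # close to the originals
```

The trade-off is that each weight is recovered only to within one scale step, which is why production stacks pair quantization with calibration to keep accuracy intact.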
This has been a key source of reinforcement for its sustained growth trajectory, in our opinion, as the strategy increases demand stickiness while enabling monetization at every stage of the emerging AI opportunity: hardware/infrastructure build-out, foundation model development, and consumer-facing application deployment.

On the hardware front, in addition to the demand for accelerators discussed above, Nvidia has become a key beneficiary of increased demand for networking solutions. The company has expanded the annualized revenue run rate of its networking business beyond $10 billion, bolstered by accelerating demand for its proprietary InfiniBand and NVLink networking technologies. As discussed in previous coverage, Microsoft has spent "several hundred million dollars" on networking hardware just to link up the "tens of thousands of [NVIDIA GPUs]" needed to support the supercomputer it uses for AI training and inference. This includes NVLink and "over 29,000 miles of InfiniBand cabling," highlighting the criticality of Nvidia's networking technology in enabling the "scale and performance needed for training LLMs."

Meanwhile, on the software front, Nvidia is progressing toward a $1 billion annualized revenue run rate on related offerings by the end of fiscal 2024. These include DGX Cloud, which serves customers' compute demands for training and inferencing complex generative AI workloads in the cloud, and NVIDIA AI Enterprise, a comprehensive toolset for streamlining the development and deployment of custom AI solutions.

Taken together, the combined ecosystem spanning hardware, software, and support services is expected to deepen Nvidia's reach into impending growth opportunities stemming from the advent of AI and beyond.
It also offers a diversified revenue portfolio, in our opinion, which mitigates Nvidia's exposure to a future downcycle in hardware demand.

Fundamental and Valuation Considerations

The combination of rising accelerated computing adoption, a competitive TCO advantage, and Nvidia's comprehensive business model is expected to reinforce the chipmaker's sustained long-term growth trajectory. While the data center segment has been the key beneficiary of the demand drivers discussed above, those drivers are also expected to unlock adjacent opportunities in Nvidia's other businesses.

This is in line with industry views that the PC and smartphone markets are poised to benefit from an impending AI shift, which potentially heralds the next growth cycle for Nvidia's core gaming business. The emergence of industrial generative AI is also poised to unlock synergies between NVIDIA AI and NVIDIA Omniverse, reinforcing the longer-term growth story for the professional visualization segment. Meanwhile, the automotive segment is expected to benefit from the ramp of AI-enabled ADAS and self-driving solutions reliant on Nvidia offerings such as NVIDIA DRIVE.

Adjusting our previous forecast for Nvidia's actual F3Q24 performance and forward prospects, we expect the company to finish the year with revenue growth of 119% y/y to $59.1 billion. This implies total revenue of $20.2 billion for the current quarter, in line with management's guidance.

Given the data center segment's position as the key beneficiary of existing AI tailwinds, as well as Nvidia's recent sales mix, data center revenues are expected to expand 205% y/y to $45.8 billion in fiscal 2024. Data center revenue growth is expected to remain in the high double-digit percentage range through fiscal 2026 and normalize at lower levels thereafter.
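The full-year arithmetic above can be sanity-checked in a few lines. The fiscal 2023 base and the first three reported FY24 quarterly figures below are assumptions drawn from Nvidia's public filings, not from this article:

```python
# Sanity check of the forecast arithmetic: 119% y/y growth on fiscal 2023
# revenue, minus the three quarters already reported, should leave roughly
# $20.2B for the final quarter. Quarterly inputs ($B) are assumed from
# Nvidia's reported results, not taken from the article itself.
FY23_REVENUE = 26.97                     # fiscal 2023 total revenue, $B
fy24_target = FY23_REVENUE * (1 + 1.19)  # 119% y/y growth -> ~$59.1B
q1_to_q3 = 7.19 + 13.51 + 18.12          # reported FY24 Q1-Q3 revenue, $B
implied_q4 = fy24_target - q1_to_q3      # ~$20.2B, matching guidance
```

The implied fourth-quarter figure lands within rounding of the $20.2 billion guidance cited in the text.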
This is in line with the market opportunities outlined by management, as well as expectations for software and services to become key growth drivers in the data center segment as the next cycle of hardware inventory digestion settles in.

(Table: forecasted revenue by segment. Source: Author)

The impending growth of higher-margin data center revenues is expected to bolster the sustainability of Nvidia's unmatched profit margins for the foreseeable future. We expect GAAP-based gross margins of 72% for fiscal 2024, normalizing toward the mid-70% range over the longer term as complementary high-margin software and services revenues continue to scale.

(Table: forecasted margins. Source: Author. See attachment: Nvidia_-_Forecasted_Financial_Information.pdf)

We are setting a base case price target of $448 for Nvidia stock.

(Table: valuation summary. Source: Author)

The base case price target is derived from a discounted cash flow ("DCF") analysis that incorporates cash flow projections in line with the fundamental analysis above. The analysis applies an 11% WACC, consistent with Nvidia's capital structure and risk profile relative to an estimated normalized benchmark Treasury rate of 4.5% under the "higher for longer" monetary policy stance. It assumes a 5.6% perpetual growth rate on projected fiscal 2028 EBITDA, in line with Nvidia's key demand drivers discussed in the foregoing analysis.
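The DCF mechanics described above (an 11% discount rate plus a Gordon-growth terminal value on the final projected year) can be sketched as follows. The cash-flow inputs are illustrative placeholders, not the author's actual projections:

```python
# Minimal DCF sketch matching the stated assumptions: 11% WACC and a 5.6%
# perpetual growth rate applied to the terminal projected year. Cash-flow
# inputs are hypothetical placeholders, not the author's model.
WACC = 0.11
G = 0.056

def dcf_value(cash_flows, wacc=WACC, g=G):
    """PV of explicit-period cash flows plus a Gordon-growth terminal value."""
    pv_explicit = sum(cf / (1 + wacc) ** t
                      for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + g) / (wacc - g)   # Gordon growth formula
    pv_terminal = terminal / (1 + wacc) ** len(cash_flows)
    return pv_explicit + pv_terminal

# Hypothetical fiscal 2024-2028 cash flows ($B), for illustration only:
enterprise_value = dcf_value([25.0, 35.0, 42.0, 48.0, 55.0])
```

Note how sensitive the result is to the spread between WACC and the perpetual growth rate: with only 5.4 points between 11% and 5.6%, the terminal value dominates the total, which is one reason small assumption changes move the price target materially.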
The perpetual growth rate applied is equivalent to 3% applied to projected fiscal 2033 EBITDA, by which point Nvidia's growth profile is expected to normalize in line with the pace of estimated economic expansion across its core operating regions.

(Tables: DCF analysis and sensitivity. Source: Author)

China Risks

The re-emergence of China headwinds following Washington's updated export rules on advanced semiconductor technologies is a likely culprit of Nvidia stock's recent post-earnings pullback. While AI tailwinds have largely taken precedence over Nvidia's China exposure this year, the updated U.S. export rules have brought those challenges back into focus.

Specifically, the new rules prevent shipments of Nvidia's A800 GPUs (a less powerful variant of the A100 tailored for the Chinese market to comply with previous U.S. export regulations) to China, and require regulatory approval for technologies that fall below, but come close to, the new rules' controlled threshold.
Management expects the updated policies, which took effect after October 17, to be a 20% to 25% headwind on data center sales in the current period, though surging demand from other regions is expected to more than compensate for the loss of market share in China and other affected markets.

Based on a simple back-of-the-napkin calculation that adds 20% to our F4Q24 data center revenue estimate (reflecting a scenario in which Nvidia continues to capture China-related GPU demand), while keeping all other growth and valuation assumptions unchanged, the ensuing fundamentals would yield a base case price target of $508.

(Tables: China scenario analysis. Source: Author)

This is in line with the stock's performance prior to Nvidia's latest earnings release, highlighting the market's expectation that perfect execution is priced in. As such, we expect Nvidia's eventual introduction of a regulation-compliant solution to be a key near-term driver of incremental upside in the stock. However, with "as many as 50 companies in China that are now working on technology that would compete with Nvidia's offerings," uncertainty remains over the timing and extent to which the U.S. chipmaker can recapture lost market share in the Chinese market.
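The headwind and scenario arithmetic above reduce to simple percentage math. The revenue base used below is an assumed prior-quarter data center figure for illustration, not the author's F4Q24 estimate:

```python
# Illustrative arithmetic for the export-rule scenarios described above:
# a 20-25% headwind on data center sales, and the upside case that adds
# 20% back to the estimate. The $18.1B base is an assumed recent quarterly
# data center figure, not management guidance or the author's model.
def headwind_range(base, low=0.20, high=0.25):
    """Revenue at risk under the stated 20-25% headwind, in the same units."""
    return base * low, base * high

def china_upside(estimate, uplift=0.20):
    """Scenario revenue if China demand is retained (estimate + 20%)."""
    return estimate * (1 + uplift)

lost_low, lost_high = headwind_range(18.1)   # $B at risk this period
scenario_revenue = china_upside(18.1)        # $B in the retained-China case
```

On this kind of base, the headwind alone spans several billion dollars a quarter, which is why the scenario shifts the price target from $448 to $508 despite all other assumptions staying fixed.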
This is also in line with management's expectation of immaterial contributions from the Chinese market to data center segment sales in the current period.

Final Thoughts

While AI-driven growth opportunities have ushered Nvidia into the $1+ trillion market cap club, the stock has largely stayed rangebound in recent months and has struggled to hold sustainably above the $500 level. Our analysis sees a compelling risk-reward setup at our base case price target of $448, which accounts for ongoing macro-related multiple compression risks, uncertainty over the pace of cyclical recovery in consumer-facing verticals such as gaming and automotive, and regulatory headwinds facing Nvidia's key Chinese market. However, market confidence in the impending introduction of a rule-compliant data center GPU solution for China remains one of the key near-term drivers that could propel Nvidia stock back toward the $500 level heading into calendar 2024.
font-size:16px; line-height:1.5; color:#999; background:transparent; }\n.wrapper{ overflow:hidden;word-break:break-all;padding:10px; }\nh1,h2{ font-weight:normal; line-height:1.35; margin-bottom:.6em; }\nh3,h4,h5,h6{ line-height:1.35; margin-bottom:1em; }\nh1{ font-size:24px; }\nh2{ font-size:20px; }\nh3{ font-size:18px; }\nh4{ font-size:16px; }\nh5{ font-size:14px; }\nh6{ font-size:12px; }\np,ul,ol,blockquote,dl,table{ margin:1.2em 0; }\nul,ol{ margin-left:2em; }\nul{ list-style:disc; }\nol{ list-style:decimal; }\nli,li p{ margin:10px 0;}\nimg{ max-width:100%;display:block;margin:0 auto 1em; }\nblockquote{ color:#B5B2B1; border-left:3px solid #aaa; padding:1em; }\nstrong,b{font-weight:bold;}\nem,i{font-style:italic;}\ntable{ width:100%;border-collapse:collapse;border-spacing:1px;margin:1em 0;font-size:.9em; }\nth,td{ padding:5px;text-align:left;border:1px solid #aaa; }\nth{ font-weight:bold;background:#5d5d5d; }\n.symbol-link{font-weight:bold;}\n/* header{ border-bottom:1px solid #494756; } */\n.title{ margin:0 0 8px;line-height:1.3;color:#ddd; }\n.meta {color:#5e5c6d;font-size:13px;margin:0 0 .5em; }\na{text-decoration:none; color:#2a4b87;}\n.meta .head { display: inline-block; overflow: hidden}\n.head .h-thumb { width: 30px; height: 30px; margin: 0; padding: 0; border-radius: 50%; float: left;}\n.head .h-content { margin: 0; padding: 0 0 0 9px; float: left;}\n.head .h-name {font-size: 13px; color: #eee; margin: 0;}\n.head .h-time {font-size: 11px; color: #7E829C; margin: 0;line-height: 11px;}\n.small {font-size: 12.5px; display: inline-block; transform: scale(0.9); -webkit-transform: scale(0.9); transform-origin: left; -webkit-transform-origin: left;}\n.smaller {font-size: 12.5px; display: inline-block; transform: scale(0.8); -webkit-transform: scale(0.8); transform-origin: left; -webkit-transform-origin: left;}\n.bt-text {font-size: 12px;margin: 1.5em 0 0 0}\n.bt-text p {margin: 0}\n</style>\n</head>\n<body>\n<div class=\"wrapper\">\n<header>\n<h2 
class=\"title\">\nCan Nvidia Hit $500 Before 2024?\n</h2>\n\n<h4 class=\"meta\">\n\n\n2023-12-02 07:58 GMT+8 <a href=https://seekingalpha.com/article/4655396-can-nvidia-hit-500-before-2024><strong>seekingalpha</strong></a>\n\n\n</h4>\n\n</header>\n<article>\n<div>\n<p>Despite Nvidia Corporation reporting blockbuster fiscal Q3 results and stellar guidance that reinforces the staying power of emerging AI tailwinds, the stock has slipped from its all-time high ...</p>\n\n<a href=\"https://seekingalpha.com/article/4655396-can-nvidia-hit-500-before-2024\">Web Link</a>\n\n</div>\n\n\n</article>\n</div>\n</body>\n</html>\n","type":0,"thumbnail":"","relate_stocks":{"LU0256863811.USD":"ALLIANZ US EQUITY \"A\" INC","LU0980610538.SGD":"Natixis Harris Associates US Equity RA SGD-H","AMZN":"亚马逊","LU0211328371.USD":"TEMPLETON GLOBAL EQUITY INCOME \"A\" (MDIS) (USD) INC","IE00BLSP4239.USD":"Legg Mason ClearBridge - Tactical Dividend Income A Mdis USD Plus","META":"Meta Platforms, Inc.","LU0348723411.USD":"ALLIANZ GLOBAL HI-TECH GROWTH \"A\" (USD) INC","LU1201861249.SGD":"Natixis Harris Associates US Equity PA SGD-H","LU0433182093.SGD":"First Eagle Amundi International AS-C SGD","IE00BFSS7M15.SGD":"Janus Henderson Balanced A Acc SGD-H","IE0034235295.USD":"PINEBRIDGE GLOBAL DYNAMIC ASSET ALLOCATION \"A\" (USD) ACC","LU0130103400.USD":"Natixis Harris Associates Global Equity RA USD","LU0068578508.USD":"First Eagle Amundi International Cl AU-C USD","BK4516":"特朗普概念","LU0417517546.SGD":"Allianz US Equity Cl AT Acc SGD","BK4592":"伊斯兰概念","MSFT":"微软","BK4567":"ESG概念","LU0130102774.USD":"Natixis Harris Associates US Equity RA USD","BK4576":"AR","LU0011850046.USD":"贝莱德全球长线股票 A2 USD","ORCL":"甲骨文","NVDA":"英伟达","IE00BBT3K403.USD":"LEGG MASON CLEARBRIDGE TACTICAL DIVIDEND INCOME \"A(USD) ACC","LU0820561909.HKD":"ALLIANZ INCOME AND GROWTH \"AM\" (HKD) INC","BK4566":"资本集团","LU0127658192.USD":"EASTSPRING INVESTMENTS GLOBAL TECHNOLOGY \"A\" (USD) ACC","IE00BMPRXR70.SGD":"Neuberger Berman 5G 
Connectivity A Acc SGD-H","LU0310799852.SGD":"FTIF - Templeton Global Equity Income A MDIS SGD","LU0661504455.SGD":"Blackrock Global Equity Income A5 SGD-H","LU0171293334.USD":"贝莱德英国基金A2","BK4577":"网络游戏","BK4077":"互动媒体与服务","BK4579":"人工智能","LU0878866978.SGD":"First Eagle Amundi International AHS-QD SGD-H","LU0979878070.USD":"FULLERTON LUX FUNDS - ASIA ABSOLUTE ALPHA \"A\" (USD) ACC","BK4141":"半导体产品","BK4122":"互联网与直销零售","LU0276348264.USD":"THREADNEEDLE (LUX) GLOBAL DYNAMIC REAL RETURN\"AUP\" (USD) INC","LU1435385759.SGD":"Natixis Loomis Sayles US Growth Equity RA SGD-H","LU0061475181.USD":"THREADNEEDLE (LUX) AMERICAN \"AU\" (USD) ACC","BK4561":"索罗斯持仓","LU1429558221.USD":"Natixis Loomis Sayles US Growth Equity RA USD","LU0985489474.SGD":"First Eagle Amundi International AHS-C SGD-H","LU0708995401.HKD":"FRANKLIN U.S. OPPORTUNITIES \"A\" (HKD) ACC","BK4581":"高盛持仓","LU0354030511.USD":"ALLSPRING U.S. LARGE CAP GROWTH \"I\" (USD) ACC","LU0149725797.USD":"汇丰美国股市经济规模基金","LU0738911758.USD":"Blackrock Global Equity Income A6 USD","LU0354030438.USD":"富国美国大盘成长基金Cl A Acc"},"source_url":"https://seekingalpha.com/article/4655396-can-nvidia-hit-500-before-2024","is_english":true,"share_image_url":"https://static.laohu8.com/5a36db9d73b4222bc376d24ccc48c8a4","article_id":"2388542962","content_text":"Despite Nvidia Corporation reporting blockbuster fiscal Q3 results and stellar guidance that reinforces the staying power of emerging AI tailwinds, the stock has slipped from its all-time high valuation.Nvidia stock's latest pullback is likely to adjust for lost revenues stemming from recent export rule changes concerning Nvidia's GPU sales to the Chinese market.At current levels, Nvidia is likely still priced for perfect execution, with several industry-wide and company-specific risks on the horizon that could overshadow existing momentum in AI opportunities.Despite another blockbuster quarter and a blowout end-of-year guidance, Nvidia Corporation (NASDAQ:NVDA) stock has yet to recapture 
its all-time high beyond $500 apiece reached on the eve of its F3Q24 earnings release. The investment community has largely attributed the paradox to the stock's rich valuation that is demanding perfect execution, which Nvidia has yet to deliver on. This is in line with the recent impact of Washington's updated rules on exports of advanced semiconductor technologies to China, which Nvidia expects to be an offsetting factor to surging demand for its products observed in other regions.Yet the company's latest results and forward outlook continue to reinforce prospects for a strong demand environment bolstered by Nvidia's mission-critical role in supporting next-generation growth opportunities. Looking ahead, we view Nvidia's unmatched profit margins and growth opportunities relative to its semiconductor and tech peers in the $1+ trillion market cap range, enhanced by emerging AI demand, as key drivers for sustaining the stock's performance at current levels. However, the potential for further gains in the near term may rest on the extent and timing of which it can recapture opportunities in China and recuperate share gains in the region, as well as its ability to create incremental TAM-expanding opportunities in non-AI forays through continued innovation.Multiple Demand Drivers to Sustain GrowthAfter two consecutive quarters of breakneck growth, particularly in the data center segment amidst strong monetization of rising AI opportunities, there has been an emerging chorus of concerns over the trend's longer-term sustainability. In response, Nvidia CEO Jensen Huang has reaffirmed his confidence in the data center segment's ability to \"grow through 2025,\" citing several drivers to said prospects.Accelerated ComputingSpecifically, Huang has attributed Nvidia's long-term growth trajectory to not only existing demand ensuing from the burst of AI workloads but also the broader transition from general purpose to accelerated computing in general. 
Considering the $1 trillion spent on the installation of CPU-based data centers over the past four years, Nvidia is well positioned to realize even greater opportunities stemming from the impending upgrade cycle to accelerated computing, spearheaded by the emergence of generative AI as the \"primary workload of most of the world's data centers.\"This is consistent with the gradual rise in prices of Nvidia's next-generation accelerators, which underscores the potential for a total addressable market, or TAM, that exceeds $1 trillion stemming from the transition to accelerated computing. Specifically, Nvidia's best-selling H100 GPU based on the Hopper architecture sells at an average price of about $30,000, despite surging towards $50,000 apiece earlier this year in secondary markets due to constrained supplies. And the impending H200 chip, which will be the first GPU to offer next-generation HBM3e - the latest high bandwidth memory processor capable of doubling inference speed relative to the H100 - is expected to cost as much as $40,000 apiece. This compares to the average $10,000 apiece for its predecessor, the A100 based on the Ampere architecture.Both the DGX A100 and DGX H100 systems consist of eight A100 and eight H100 GPUs, respectively, despite the latter's capability of greater performance and inference speeds. Meanwhile, the latest DGX GH200 system strings together 256 GH200 Superchips capable of 500x more memory than the DGC H100 system, which, taken together, highlights the TAM-expanding opportunity stemming from the emerging era of accelerated computing.The era of accelerated computing has also increased demand for next-generation data center GPUs from a wide range of verticals spanning cloud service providers (\"CSPs\"), enterprise customers, and sovereign cloud infrastructure, which highlights Nvidia's prospects for incremental market share gains in this foray. 
This is in line with major CSPs spanning Amazon Web Services (AMZN), Google Cloud (GOOG, GOOGL), Microsoft Azure (MSFT), and Oracle Cloud (ORCL) planning deployments of H200-based instances when the chip enters general availability next year, offering validation for the technology. Nvidia has also made initial shipments of the GH200 Grace Hopper Superchip to the Los Alamos National Lab, the Swiss National Supercomputing Center, and the UK government to support the country's build of "the world's fastest AI supercomputer, called Isambard-AI," underscoring the chipmaker's extensive reach into emerging opportunities across the public sector as well.

Competitive TCO

Nvidia's latest innovations have also enabled a competitive TCO (total cost of ownership) advantage. This continues to address customers' demands for greater cost efficiency while delivering the performance needed to facilitate increasingly complex workloads.

For instance, Nvidia has been actively stepping up its complementary software capabilities in order to optimize TCO across its portfolio of hardware offerings. This includes the recent introduction of TensorRT-LLM, which, combined with the H100 GPU, is capable of delivering up to 8x better performance relative to the A100 GPU alone on small language models, while also reducing TCO by as much as 5.3x and energy costs by as much as 5.6x.

The improvements are realized through a compilation of innovations spanning tensor parallelism, in-flight batching, and quantization:

- Tensor parallelism: This feature enables inferencing (the process of having a trained model make predictions on live data) at scale. Historically, developers have had to make manual adjustments to models in order to coordinate parallel execution across multiple GPUs and optimize LLM inference performance.
However, tensor parallelism eliminates a large roadblock by allowing large advanced language models to "run in parallel across multiple GPUs connected through NVLink and across multiple servers without developer intervention or model changes," thus significantly reducing time and cost to deployment.

- In-flight batching: In-flight batching is an "optimized scheduling technique" that allows an LLM to continuously execute multiple tasks simultaneously. The feature effectively optimizes GPU usage and minimizes idle capacity, which in turn improves TCO.

"With in-flight batching, rather than waiting for the whole batch to finish before moving on to the next set of requests, the TensorRT-LLM runtime immediately evicts finished sequences from the batch. It then begins executing new requests while other requests are still in flight." (Source: NVIDIA Technical Blog.)

- Quantization: This is the process of reducing the memory that the billions of model weight values within an LLM occupy on the GPU. It effectively lowers memory consumption for model inference on the same hardware, while enabling faster performance with minimal accuracy loss. TensorRT-LLM automates the quantization process, converting model weights into lower-precision formats without any modification required to the model code.

Taken together, TensorRT-LLM addresses the major optimization requirements of enterprise and CSP deployments and continues to validate the value proposition of the NVIDIA CUDA and hardware ecosystem. Coupled with compatibility with major LLM frameworks, such as Meta Platforms' (META) Llama 2 and OpenAI's GPT-2 and GPT-3, on which the most popular generative AI capabilities are built, the latest introduction of TensorRT-LLM is expected to further reinforce Nvidia's capture of growth opportunities ahead.
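The weight-quantization idea described above can be sketched in a few lines. This is a generic illustration of symmetric per-tensor INT8 quantization, not TensorRT-LLM's actual implementation; the array shapes and names are arbitrary.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map float weights onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct approximate float weights from the INT8 tensor."""
    return q.astype(np.float32) * scale

# Hypothetical weight matrix standing in for one LLM layer
w = np.random.randn(512, 512).astype(np.float32)
q, scale = quantize_int8(w)

# INT8 storage uses a quarter of the memory of FP32 weights...
assert q.nbytes == w.nbytes // 4
# ...at the cost of a small, bounded reconstruction error
err = np.abs(dequantize(q, scale) - w).max()
```

This captures the trade-off the article describes: the same hardware holds a larger model (or batch) in memory, with the reconstruction error bounded by half the scale factor.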
By improving TCO, Nvidia also plays a critical role in expanding the reach of generative AI, which in turn reinforces a sustained demand flywheel for its offerings.

Full Stack Advantage

Nvidia has also bolstered its full-stack advantage in recent years through the build-out of its complementary software-hardware ecosystem. This has been a key source of reinforcement for its sustained growth trajectory, in our opinion, as the strategy increases demand stickiness while also enabling monetization through every stage of the emerging AI opportunity, spanning hardware/infrastructure build-out, foundation model development, and consumer-facing application deployment.

On the hardware front, in addition to the demand for accelerators discussed in the earlier section, Nvidia has also become a key beneficiary of increased demand for networking solutions. Specifically, the company has expanded the annualized revenue run rate for its networking business beyond $10 billion, bolstered by accelerating demand for its proprietary InfiniBand and NVLink networking technologies. As discussed in previous coverage, Microsoft has spent "several hundred million dollars" on networking hardware just to link up the "tens of thousands of [NVIDIA GPUs]" needed to support the supercomputer it uses for AI training and inference. This includes NVLink and "over 29,000 miles of InfiniBand cabling," highlighting the criticality of Nvidia's networking technology in enabling the "scale and performance needed for training LLMs."

Meanwhile, on the software front, Nvidia is progressing toward a $1 billion annualized revenue run rate on related offerings by the end of fiscal 2024.
These offerings include DGX Cloud, which facilitates customers' compute demands for training and inferencing complex generative AI workloads in the cloud, as well as NVIDIA AI Enterprise, which comprises comprehensive tools designed to streamline the development and deployment of custom AI solutions.

Taken together, the combined ecosystem spanning hardware, software, and support services is expected to deepen Nvidia's reach into impending growth opportunities stemming from the advent of AI and beyond. It also offers a diversified revenue portfolio, in our opinion, which mitigates Nvidia's exposure to future downcycles in hardware demand.

Fundamental and Valuation Considerations

The combination of rising accelerated computing adoption, a competitive TCO advantage, and Nvidia's comprehensive business model is expected to reinforce the chipmaker's sustained long-term growth trajectory. While the data center segment has been the key beneficiary of the demand drivers discussed in the foregoing analysis, those drivers are also expected to unlock adjacent opportunities for Nvidia's other business avenues.

This is in line with industry views that the PC and smartphone markets are poised to benefit from an impending AI shift, which potentially heralds the next growth cycle for Nvidia's core gaming business. The emergence of industrial generative AI is also poised to unlock the synergies of NVIDIA AI and NVIDIA Omniverse, reinforcing the longer-term growth story for the professional visualization segment. Meanwhile, the automotive segment is also expected to benefit from the ramp of AI-enabled ADAS/self-driving solutions reliant on Nvidia offerings such as NVIDIA DRIVE.

Adjusting our previous forecast for Nvidia's actual F3Q24 performance and forward prospects based on the foregoing discussion, we expect the company to finish the year with revenue growth of 119% y/y to $59.1 billion.
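As a quick sanity check on the implied current-quarter figure, the full-year estimate can be decomposed against Nvidia's reported FY24 quarterly revenues; the Q1-Q3 figures below (approximately $7.19B, $13.51B, and $18.12B) are pulled from Nvidia's own releases, not from this article.

```python
# Implied F4Q24 revenue from the full-year estimate, using Nvidia's
# reported FY24 Q1-Q3 revenues (approximate, in $ billions).
q1, q2, q3 = 7.19, 13.51, 18.12
fy24_estimate = 59.1
implied_q4 = fy24_estimate - (q1 + q2 + q3)
print(round(implied_q4, 1))  # roughly $20.3B, consistent with the ~$20.2B cited
```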
This would imply total revenue of $20.2 billion for the current quarter, in line with management's guidance.

Considering the data center segment's positioning as the key beneficiary of existing AI tailwinds, as well as Nvidia's sales mix in recent quarters, the relevant revenues are expected to expand by 205% y/y to $45.8 billion for fiscal 2024. Data center revenue growth is expected to remain in the high double-digit percentage range through fiscal 2026 and normalize at lower levels thereafter. This is in line with the impending market opportunities as outlined by management and discussed in the foregoing analysis, as well as expectations for software and services opportunities to become key growth drivers in the data center segment as the next cycle of hardware inventory digestion settles in.

The impending growth of higher-margin data center revenues is expected to bolster the sustainability of Nvidia's unmatched profit margins within the foreseeable future. We expect GAAP-based gross margins of 72% for fiscal 2024, with normalization towards the mid-70% range over the longer term as complementary high-margin software and services revenues continue to scale.

We are setting a base case price target of $448 for Nvidia stock.

The base case price target is derived under a discounted cash flow ("DCF") analysis, which takes into consideration cash flow projections in line with the fundamental analysis discussed in the earlier section. The analysis applies an 11% WACC, in line with Nvidia's capital structure and risk profile relative to the estimated normalized benchmark Treasury rate of 4.5% under the "higher for longer" monetary policy stance. The analysis assumes a 5.6% perpetual growth rate on projected fiscal 2028 EBITDA, which is in line with Nvidia's key demand drivers discussed in the foregoing analysis.
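The DCF mechanics behind a target like this can be sketched with a Gordon-growth terminal value. The cash-flow figures and share count below are placeholders for illustration, not the article's actual projections, so the output will not reproduce the $448 target.

```python
# Schematic Gordon-growth DCF mirroring the stated assumptions
# (11% WACC, 5.6% perpetual growth applied to the final projected year).
wacc, g = 0.11, 0.056

# Hypothetical FY25-FY28 free cash flows in $ billions (placeholders,
# NOT the article's projections).
fcf = [25.0, 32.0, 38.0, 43.0]

# Terminal value at the end of the explicit period: CF * (1 + g) / (WACC - g)
terminal = fcf[-1] * (1 + g) / (wacc - g)

# Discount explicit-period cash flows and the terminal value back to today.
pv = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
pv += terminal / (1 + wacc) ** len(fcf)

shares = 2.47  # approximate diluted shares outstanding, billions (assumption)
price_per_share = pv / shares
```

Note how sensitive the result is to the WACC-minus-growth spread in the denominator: at these assumptions the spread is only 5.4 points, so small changes in either input move the terminal value, and hence the target, substantially.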
The perpetual growth rate applied is equivalent to a 3% rate applied on projected fiscal 2023 EBITDA, when Nvidia's growth profile is expected to normalize in line with the pace of estimated economic expansion across its core operating regions.

China Risks

The re-emergence of China headwinds following Washington's updated export rules on advanced semiconductor technologies is likely a culprit behind Nvidia stock's recent post-earnings pullback. While AI tailwinds have largely taken precedence over Nvidia's exposure to China risks this year, the recent updates to U.S. export rules have brought the relevant challenges back into focus.

Specifically, the new rules prevent shipments of the Nvidia A800 GPUs (a less powerful variant of the A100 GPUs tailored for the Chinese market to comply with previous U.S. export regulations) to China and require regulatory approval for technologies that fall below, but come close to, the new rules' controlled threshold. Management expects the updated policies, which took effect after October 17, to be a 20% to 25% headwind on data center sales in the current period, though surging demand from other regions is expected to more than compensate for the relevant loss of market share in China and other affected markets.

Based on a simple back-of-the-napkin calculation that adds 20% to our F4Q24 data center revenue estimate, reflecting the scenario in which Nvidia continues to partake in China-related GPU demand while keeping all other growth and valuation assumptions unchanged, the ensuing fundamental prospects would yield a base case price target of $508.

This is in line with the stock's performance prior to Nvidia's latest earnings release, highlighting the market's expectation of perfect execution being priced in. As such, we expect Nvidia's eventual introduction of a regulation-compliant solution to be a key near-term driver of incremental upside potential in the stock.
However, with "as many as 50 companies in China that are now working on technology that would compete with Nvidia's offerings," uncertainties remain on the timing and extent to which the U.S. chipmaker could recapture lost market share in the Chinese market. This is also in line with management's expectations for immaterial contributions from the Chinese market to data center segment sales in the current period.

Final Thoughts

While AI-driven growth opportunities have ushered in Nvidia's admission to the $1+ trillion market cap club, the stock has largely stayed rangebound in recent months. It has also struggled to stay sustainably above the $500 level. Our analysis points to a compelling risk-reward setup at our base case price target of $448, which considers ongoing macro-related multiple compression risks, uncertainties over the pace of cyclical recovery in consumer-facing verticals such as gaming and automotive, as well as regulatory headwinds facing Nvidia's key Chinese market.
However, market confidence in the impending introduction of a rule-compliant data center GPU solution for the Chinese market remains one of the key near-term drivers for propelling the Nvidia stock back towards the $500-level heading into calendar 2024.
recent export rule changes concerning Nvidia's GPU sales to the Chinese market.</p></li><li><p>At current levels, Nvidia is likely still priced for perfect execution, with several industry-wide and company-specific risks on the horizon that could overshadow existing momentum in AI opportunities.</p></li></ul><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/85bfc955c6d409806eb1188e8f7286c5\" tg-width=\"750\" tg-height=\"490\"/></p><p>Despite another blockbuster quarter and a blowout end-of-year guidance, <strong>Nvidia Corporation</strong> (NASDAQ:NVDA) stock has yet to recapture its all-time high beyond $500 apiece reached on the eve of its F3Q24 earnings release. The investment community has largely attributed the paradox to the stock's rich valuation that is demanding perfect execution, which Nvidia has yet to deliver on. This is in line with the recent impact of Washington's updated rules on exports of advanced semiconductor technologies to China, which Nvidia expects to be an offsetting factor to surging demand for its products observed in other regions.</p><p>Yet the company's latest results and forward outlook continue to reinforce prospects for a strong demand environment bolstered by Nvidia's mission-critical role in supporting next-generation growth opportunities. Looking ahead, we view Nvidia's unmatched profit margins and growth opportunities relative to its semiconductor and tech peers in the $1+ trillion market cap range, enhanced by emerging AI demand, as key drivers for sustaining the stock's performance at current levels. 
However, the potential for further gains in the near term may rest on the extent and timing of which it can recapture opportunities in China and recuperate share gains in the region, as well as its ability to create incremental TAM-expanding opportunities in non-AI forays through continued innovation.</p><h2 id=\"id_927142661\">Multiple Demand Drivers to Sustain Growth</h2><p>After two consecutive quarters of breakneck growth, particularly in the data center segment amidst strong monetization of rising AI opportunities, there has been an emerging chorus of concerns over the trend's longer-term sustainability. In response, Nvidia CEO Jensen Huang has reaffirmed his confidence in the data center segment's ability to "grow through 2025," citing several drivers to said prospects.</p><h3 id=\"id_314486406\"><em>Accelerated Computing</em></h3><p>Specifically, Huang has attributed Nvidia's long-term growth trajectory to not only existing demand ensuing from the burst of AI workloads but also the broader transition from general purpose to accelerated computing in general. Considering the $1 trillion spent on the installation of CPU-based data centers over the past four years, Nvidia is well positioned to realize even greater opportunities stemming from the impending upgrade cycle to accelerated computing, spearheaded by the emergence of generative AI as the "primary workload of most of the world's data centers."</p><p>This is consistent with the gradual rise in prices of Nvidia's next-generation accelerators, which underscores the potential for a total addressable market, or TAM, that exceeds $1 trillion stemming from the transition to accelerated computing. Specifically, Nvidia's best-selling H100 GPU based on the Hopper architecture sells at an average price of about $30,000, despite surging towards $50,000 apiece earlier this year in secondary markets due to constrained supplies. 
And the impending H200 chip, which will be the first GPU to offer next-generation HBM3e - the latest high bandwidth memory processor capable of doubling inference speed relative to the H100 - is expected to cost as much as $40,000 apiece. This compares to the average $10,000 apiece for its predecessor, the A100 based on the Ampere architecture.</p><p>Both the DGX A100 and DGX H100 systems consist of eight A100 and eight H100 GPUs, respectively, despite the latter's capability of greater performance and inference speeds. Meanwhile, the latest DGX GH200 system strings together 256 GH200 Superchips capable of 500x more memory than the DGC H100 system, which, taken together, highlights the TAM-expanding opportunity stemming from the emerging era of accelerated computing.</p><p>The era of accelerated computing has also increased demand for next-generation data center GPUs from a wide range of verticals spanning cloud service providers ("CSPs"), enterprise customers, and sovereign cloud infrastructure, which highlights Nvidia's prospects for incremental market share gains in this foray. This is in line with primary CSPs spanning Amazon Web Services (AMZN), Google Cloud (GOOG, GOOGL), Microsoft Azure (MSFT), and Oracle Cloud's (ORCL) planned deployment of H200-based instances when the chip enters general availability next year, offering validation to the technology. Nvidia has also made initial shipments of the GH200 Grace Hopper Superchip to the Los Alamos National Lab, Swiss National Supercomputing Center, and the UK government to support the country's build of "the world's fastest AI supercomputers called Isambard-AI," underscoring the chipmaker's extensive reach into emerging opportunities across the public sector as well.</p><h3 id=\"id_4140002213\"><em>Competitive TCO</em></h3><p>Nvidia's latest innovations have also enabled a competitive TCO (or total cost of ownership) advantage. 
This continues to effectively address customers' demands for greater cost efficiencies in the process of improving performance needed to facilitate increasingly complex workloads.</p><p>For instance, Nvidia has been actively stepping up its complementary software capabilities in order to optimize TCO on its portfolio of hardware offerings for customers. This includes the recent introduction of TensorRT-LLM, which combined with the H100 GPU is capable of delivering up to 8x better performance relative to the A100 GPU alone on small language models, while also reducing TCO by as much as 5.3x and energy costs by as much as 5.6x.</p><p></p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/6b8085057033883912d3ab91c9a4a139\" tg-width=\"640\" tg-height=\"510\"/></p><p>developer.nvidia.com</p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/e2d317ebb6ba98000a8ceadbfbc2b507\" tg-width=\"640\" tg-height=\"510\"/></p><p>developer.nvidia.com</p><p></p><p>The improvements are realized through a compilation of innovations spanning tensor parallelism, in-flight batching, and quantization:</p><ul style=\"\"><li><p><strong>Tensor parallelism</strong>: The feature enables inferencing - or the process of having a trained model make predictions on live data - at scale. Historically, developers have had to make manual adjustments to models in order to coordinate parallel execution across multiple GPUs and optimize LLM inference performance. However, tensor parallelism eliminates a large roadblock by allowing large advanced language models to "run in parallel across multiple GPUs connected through NVLink and across multiple servers without developer intervention or model changes," thus significantly reducing time and cost to deployment.</p></li><li><p><strong>In-flight batching</strong>: Inflight-batching is an "optimized scheduling technique" that allows an LLM to continuously execute multiple tasks simultaneously. 
The feature effectively optimizes GPU usage and minimizes idle capacity, which inadvertently improves TCO.</p></li></ul><blockquote><p>With in-flight batching, rather than waiting for the whole batch to finish before moving on to the next set of requests, the TensorRT-LLM runtime immediately evicts finished sequences from the batch. It then begins executing new requests while other requests are still in flight.</p><p><em>Source:</em> NVIDIA Technical Blog.</p></blockquote><ul style=\"\"><li><p><strong>Quantization</strong>: This is the process of reducing the memory in which the billions of model weight values within LLMs occupy in the GPU. This effectively lowers memory consumption in model inferencing on the same hardware, while enabling faster performance and higher accuracy. TensorRT-LLM automatically enables the quantization process, converting model weights into lower precision formats without any moderation required to the model code.</p></li></ul><p>Taken together, TensorRT-LLM troubleshoots the major optimization requirements demanded from enterprise and CSP deployments and continues to provide validation to the value proposition of the NVIDIA CUDA and hardware ecosystem. Coupled with compatibility with major LLM frameworks, such as <a href=\"https://laohu8.com/S/META\">Meta Platforms</a>' (META) Llama 2 and OpenAI's GPT-2 and GPT-3, which the most common/popular generative AI capabilities on built on, the latest introduction of TensorRT-LLM is expected to further reinforce Nvidia's capture of growth opportunities ahead. By improving TCO, Nvidia also plays a critical role in expanding the reach of generative AI, which in turn reinforces a sustained demand for flywheel for its offerings.</p><h3 id=\"id_4018034197\"><em>Full Stack Advantage</em></h3><p>Nvidia has also bolstered its full-stack advantage in recent years through the build-out of its complementary software-hardware ecosystem. 
This has been a key source of reinforcement for its sustained trajectory of growth in our opinion, as the strategy increases demand stickiness while also enabling monetization through every stage of the emerging AI opportunity, spanning hardware/infrastructure build-out, foundation model development, and consumer-facing application deployment.</p><p>On the hardware front, in addition to the demand for accelerators as discussed in the earlier section, Nvidia has also become a key beneficiary of increased demand for networking solutions. Specifically, the company has expanded the annualized revenue run rate for its networking business beyond $10 billion, bolstered by accelerating demand for its proprietary InfiniBand and NVLink networking technologies. As discussed in previous coverage, Microsoft has spent "several hundred million dollars" on networking hardware just to link up the "tens of thousands of [NVIDIA GPUs]" needed to support the supercomputer it uses for AI training and inference. This includes NVLink and "over 29,000 miles of InfiniBand cabling," highlighting the criticality of Nvidia's networking technology in enabling "scale and performance needed for training LLMs."</p><p>Meanwhile, on the software front, Nvidia is progressing toward a $1 billion annualized revenue run rate on related offerings by the end of fiscal 2024. These offerings include DGX Cloud, which facilitates compute demands from customers for training and inferencing complex generative AI workloads in the cloud, as well as NVIDIA AI Enterprise, which comprises comprehensive tools designed for streamlining the development and deployment of custom AI solutions for customers.</p><p>Taken together, the combined ecosystem spanning hardware, software and support services is expected to deepen Nvidia's reach into impending growth opportunities stemming from the advent of AI and beyond. 
It also offers a diversified revenue portfolio in our opinion, which mitigates Nvidia's exposure to the imminent downcycle for hardware demand in the future.</p><h2 id=\"id_2156071729\">Fundamental and Valuation Considerations</h2><p>The combination of rising accelerated computing adoption, a competitive TCO advantage, and Nvidia's comprehensive business model is expected to reinforce the chipmaker's sustained long-term growth trajectory. While the data center segment has been the key beneficiary of the demand drivers discussed in the foregoing analysis, they are also expected to unlock adjacent opportunities to Nvidia's other business avenues.</p><p>This is in line with industry views that the PC and smartphone markets are poised to benefit from an impending AI shift, which potentially harbingers the next growth cycle for Nvidia's core gaming business. The emergence of industrial generative AI is also poised to unlock the synergies of NVIDIA AI and NVIDIA Omniverse, reinforcing the longer-term growth story for the professional visualization segment. Meanwhile, the automotive segment is also expected to benefit from the ramp of AI-enabled ADAS / self-driving solutions reliant on Nvidia solutions such as NVIDIA DRIVE.</p><p>Adjusting our previous forecast for Nvidia's actual F3Q24 performance and forward prospects based on the foregoing discussion, we expect the company to finish the year with revenue growth of 119% y/y to $59.1 billion. This would imply total revenue of $20.2 billion for the current quarter, in line with management's guidance.</p><p>Considering the data center segment's positioning as the key beneficiary of existing AI tailwinds, as well as Nvidia's sales mix in recent quarters, relevant revenues are expected to expand by 205% y/y to $45.8 billion for fiscal 2024. Data center revenue growth is expected to remain in the high double-digit percentage range through fiscal 2026 and normalize at lower levels thereafter. 
This is in line with the impending market opportunities as outlined by management and as discussed in the foregoing analysis, as well as expectations for software and services opportunities being key growth drivers in the data center segment as the next cycle of hardware inventory digestion settles in.</p><p></p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/03d8082fddb03af3e9a1a430b07b28e7\" tg-width=\"640\" tg-height=\"95\"/></p><p>Author</p><p></p><p>The impending growth of higher-margin data center revenues is expected to bolster the sustainability of Nvidia's unmatched profit margins within the foreseeable future. We expect GAAP-based gross margins of 72% for fiscal 2024, with normalization towards the mid-70% range over the longer term as complementary high-margin software and services revenues continue to scale.</p><p></p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/1af0579208b7fbfde01be43e0f1e4332\" tg-width=\"640\" tg-height=\"155\"/></p><p>Author</p><p></p><p>Nvidia_-_Forecasted_Financial_Information.pdf</p><p>We are setting a base case price target of $448 for Nvidia stock.</p><p></p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/d7451868978a1d036432990a082d23a7\" tg-width=\"640\" tg-height=\"227\"/></p><p>Author</p><p></p><p>The base case price target is derived under the discounted cash flow ("DCF") analysis, which takes into consideration cash flow projections in line with the fundamental analysis discussed in the earlier section. The analysis applies an 11% WACC, in line with Nvidia's capital structure and risk profile relative to the estimated normalized benchmark Treasury rate of 4.5% under the "higher for longer" monetary policy stance. The analysis assumes a 5.6% perpetual growth rate on projected fiscal 2028 ETBIDA, which is in line with Nvidia's key demand drivers discussed in the foregoing analysis. 
The perpetual growth rate applied is equivalent to 3% applied on projected fiscal 2023 EBITDA when Nvidia's growth profile is expected to normalize in line with the pace of estimated economic expansion across its core operating regions.</p><p></p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/74c516a2bca077b6405b850663594fbb\" tg-width=\"640\" tg-height=\"366\"/></p><p>Author</p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/5fc463296ff52490459d72d948b03ca9\" tg-width=\"640\" tg-height=\"255\"/></p><p>Author</p><p></p><h2 id=\"id_1427009141\">China Risks</h2><p>The re-emergence of China headwinds following Washington's updated export rules on advanced semiconductor technologies is likely a culprit of the Nvidia stock's recent post-earnings pullback. While AI tailwinds have largely taken precedence over Nvidia's exposure to China risks this year, the recent updates made to U.S. export rules have brought the relevant challenges back into focus.</p><p>Specifically, the new rules prevent shipments of the Nvidia A800 GPUs (a less powerful variant of the A100 GPUs tailored for the Chinese market to comply with previous U.S. export regulations) to China and require regulatory approval on technologies that fall below, but come close, to the new rules' controlled threshold. 
Management expects the updated policies, which took effect after October 17, to be a 20% to 25% headwind on data center sales in the current period, though surging demand from other regions is expected to more than compensate for the relevant loss of market share in China and other affected markets.</p><p>Based on a simple back-of-the-napkin calculation that adds 20% to our F4Q24 data center revenue estimate to reflect the scenario in which Nvidia continues to partake in China-related GPU demand while keeping all other growth and valuation assumptions unchanged, the ensuing fundamental prospects would yield a base case price target of $508.</p><p></p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/a57fd461bc4dc85de86dc28acfe32025\" tg-width=\"640\" tg-height=\"177\"/></p><p>Author</p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/5e7655404cc5fc251fd8b760694b9497\" tg-width=\"640\" tg-height=\"178\"/></p><p>Author</p><p class=\"t-img-caption\"><img src=\"https://static.tigerbbs.com/018087386b6ff5412186e43045440790\" tg-width=\"640\" tg-height=\"407\"/></p><p>Author</p><p></p><p>This is in line with the stock's performance prior to Nvidia's latest earnings release, highlighting the markets expectations for perfect execution being priced in. As such, we expect Nvidia's eventual introduction of a regulation-compliant solution to be a key near-term driver of incremental upside potential in the stock. However, with "as many as 50 companies in China that are now working on technology that would compete with Nvidia's offerings," uncertainties remain on the timing and extent to which the U.S. chipmaker could recapture lost market share in the Chinese market. 
This is also in line with management's expectations for immaterial contributions from the Chinese market to data center segment sales in the current period.</p><h2 id=\"id_119364005\">Final Thoughts</h2><p>While AI-driven growth opportunities have ushered in Nvidia's admission to the $1+ trillion market cap club, the stock has largely stayed rangebound in recent months and has struggled to hold sustainably above the $500 level. Our analysis points to a compelling risk-reward set-up at our base case price target of $448, which considers ongoing macro-related multiple compression risks, uncertainties around the pace of cyclical recovery in consumer-facing verticals such as gaming and automotive, and regulatory headwinds facing Nvidia's key Chinese market. However, market confidence in the impending introduction of a rule-compliant data center GPU solution for the Chinese market remains one of the key near-term drivers for propelling Nvidia stock back toward the $500 level heading into calendar 2024.</p></body></html>
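The back-of-the-napkin China scenario above can be sketched numerically. Only the $448 base case target, the $508 scenario target, and the 20% data center add-back come from the article; the "pass-through" factor below is backed out from those figures, and the linear scaling is an illustrative assumption, not the author's actual DCF mechanics.

```python
# Illustrative sketch of the article's China scenario, assuming the
# price target scales linearly with the data center revenue uplift.
# The pass-through factor (how much of the revenue bump reaches the
# target) is derived from the article's own $448 -> $508 move.

BASE_TARGET = 448.0        # article's base case price target
SCENARIO_TARGET = 508.0    # article's target with China demand restored
CHINA_UPLIFT = 0.20        # +20% to the F4Q24 data center estimate

def implied_pass_through(base: float, scenario: float, uplift: float) -> float:
    """Fraction of the revenue uplift that flows through to the target."""
    return (scenario / base - 1.0) / uplift

def scenario_target(base: float, uplift: float, pass_through: float) -> float:
    """Scale the base target by the damped revenue uplift."""
    return base * (1.0 + uplift * pass_through)

pt = implied_pass_through(BASE_TARGET, SCENARIO_TARGET, CHINA_UPLIFT)
print(f"implied pass-through: {pt:.2f}")                                   # ~0.67
print(f"reconstructed target: ${scenario_target(BASE_TARGET, CHINA_UPLIFT, pt):.2f}")
```

Read the other way, the implied pass-through of roughly two-thirds suggests that, under these simplified assumptions, only part of a restored China data center stream would be credited to the valuation, consistent with the article's view that other regions already absorb much of the demand.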