To The Moon
edmoney
Posts · 2
Following · 0
Followers · 0
edmoney
·
2025-09-27
..
On the competition to invest in OpenAI, the AI bubble, ASIC...Jensen Huang answers it all
Jensen Huang said that OpenAI is likely to become the next trillion-dollar company, and that his one regret is not investing earlier. Within the next five years, AI-driven revenue will grow from $100 billion to the trillion-dollar level, and it may already be there. On ASIC competition, Nvidia declared that even if competitors priced their chips at zero, customers would still choose Nvidia, because the overall system costs less to operate.
Views · 217
Comment
Like · 1
Share
Report
edmoney
·
2023-07-12
$HSBC Holdings(00005)$
..
Views · 360
Comment
Like
Share
Report
No followers yet
edmoney reposted the following article:

On the competition to invest in OpenAI, the AI bubble, ASIC... Jensen Huang answers it all
华尔街见闻 (Wallstreetcn) · 2025-09-27 13:30

Jensen Huang said that OpenAI is likely to become the next trillion-dollar company, and that it is a pity Nvidia did not invest earlier. In the next five years, AI-driven revenue will grow from $100 billion to the trillion-dollar level, and it may already be there. On ASIC competition, Nvidia says that even if competitors priced their chips at zero, customers would still choose Nvidia, because the system's operating cost is lower.

Recently, Nvidia founder and CEO Jensen Huang appeared on the biweekly "Bg2 Pod" for a wide-ranging conversation with hosts Brad Gerstner and Clark Tang.

During the conversation, Huang discussed the $100 billion partnership with OpenAI and gave his views on the AI competitive landscape, the prospects for sovereign AI, and more.

Huang said that AI competition is now fiercer than ever: the market has evolved from simple "GPUs" into complex, continuously evolving "AI factories" that must handle diverse workloads and exponentially growing inference tasks.

He predicts that if AI eventually adds $10 trillion of value to global GDP, the annual capital expenditure of the AI factories behind it will need to reach $5 trillion.

On the OpenAI partnership, Huang said that OpenAI is likely to become the next trillion-dollar hyperscale company.
His only regret is that Nvidia didn't invest more, sooner: "All the money should be given to them."

On AI commercialization, Huang predicts that in the next five years, AI-driven revenue will grow from $100 billion to the trillion-dollar level.

On ASIC competition, Nvidia's line is that even if competitors set their chip price at zero, customers will still choose Nvidia, because its systems are cheaper to operate.

The following are the highlights of the conversation:

- OpenAI wants to establish a "direct relationship" with Nvidia similar to what Musk and X have, including both a direct working relationship and a direct procurement relationship.
- Assume AI adds $10 trillion of value to global GDP, and that this $10 trillion of token generation carries a 50% gross margin; then $5 trillion of it requires factories and AI infrastructure, so a reasonable annual capital expenditure for those plants is about $5 trillion.
- 10 GW would require about $400 billion of investment, and that $400 billion would largely need to be funded through OpenAI's offtake agreement, i.e., its exponentially growing revenue. It has to be financed through their capital: equity financing plus whatever debt can be raised.
- The probability that AI-driven revenue grows from $100 billion to $1 trillion within five years is almost certain, and it has now almost been reached.
- The global shortage of computing power is not a shortage of GPUs; cloud vendors' orders chronically underestimate future demand, leaving Nvidia in a long-running "emergency production mode."
- Huang said Nvidia's only regret is that OpenAI invited it to invest early on, "but we were too poor at the time to invest enough money, and I should have given them all my money."
- Nvidia is likely to become the first company in the $10 trillion class. A decade ago, people said there could never be a trillion-dollar company; now there are ten.
- AI is now more competitive than ever, but also harder than ever, because wafer costs keep rising, which means you cannot achieve an X-fold growth factor unless you do co-design at extreme scale, says Huang.
- Google had the advantage of foresight: it started TPU1 before everything else began. When TPU becomes a big business, customer-owned tooling will become a mainstream trend.
- The competitive advantage of Nvidia's chips lies in total cost of ownership (TCO). Huang says competitors are building cheaper ASICs and could even set the price at zero, yet customers would still buy the Nvidia system, because the total cost of running it is lower than that of running the free chip (the land, power, and infrastructure around it are already worth $15 billion).
- Nvidia chips deliver twice the performance, or tokens per watt, of other chips. With much higher performance per unit of energy, customers can generate twice as much revenue from the same data center. Who doesn't want twice the income?
- Every country must build sovereign AI. No one needs an atomic bomb, but everyone needs AI.
- Just as motors once replaced manual labor, AI supercomputers and AI factories will generate tokens that augment human intelligence, which accounts for roughly 55-65% of global GDP, about $50 trillion.
- AI is not a zero-sum game. "The more ideas I have, the more problems I imagine we can solve, and the more jobs we create," Huang said.
- One of the really cool things that will play out over the next five years is the convergence of AI with robotics.
- Even setting aside the new opportunities AI creates, the mere fact that AI changes how things are done has tremendous value. It is like switching from kerosene lamps to electricity, or from propeller aircraft to jets.

The following is an edited and abridged summary of the conversation, translated with AI tools by Wallstreetcn:

Moderator: Jensen, it's great to be back, and of course with my partner Clark Tang. I can't believe it. Welcome to NVIDIA.

Jensen Huang: Oh, and beautiful glasses.

Moderator: They look really good on you. The problem is that everyone will want you to wear them all the time now. They'll say, "Where are the red glasses?" I can testify to that.

Jensen Huang: It has been over a year since we last recorded the show. More than 40% of revenue today comes from inference, but inference is about to change because of the advent of chain-of-thought reasoning.
It's about to grow a billion times, right? A million times, a billion times.

Moderator: Right. This is exactly the part most people haven't fully internalized yet. This is the industry we've been talking about; this is the Industrial Revolution. To be honest, it feels like you and I have continued the show every day since then. In AI time, it has been about a hundred years.

I recently re-watched the show, and many of the things we discussed stood out. One of the most profound moments for me was when you were slapping the table and emphasizing — remember when pre-training was at some sort of low tide and people were saying, "God, pre-training is over, right? Pre-training is over. We're not going to continue. We're overbuilding." That was about a year and a half ago. You said that inference would grow not 100 times, not 1,000 times, but a billion times.

Which brings us to where we are today. You announced this huge deal. We should start there.

Jensen Huang: I underestimated it. Let me make it official. I reckon we have three scaling laws right now, right? We have the pre-training scaling law. We have the post-training scaling law; post-training is basically the AI practicing.

Yes, practicing a skill until you master it. It tries a variety of different approaches, and in order to do that, it has to reason. So training and inference are now integrated in reinforcement learning. It's very complicated. That's called post-training. Then the third is inference. The old way of inference was one-shot, right? But the new way of inference we endorse is thinking: think before you answer.

So now you have three scaling laws. The longer you think, the better the quality of the answer. In the process of thinking, you do research and check some basic facts. You learn something, think a little more, learn a little more, and then generate the answer, rather than generating it straight away. So thinking, post-training, pre-training: we now have three scaling laws instead of one.

Moderator: You knew this last year, but what is your level of confidence this year that inference will grow a billion times, and in how far that will take the level of intelligence? Are you more confident this year than you were last year?

Jensen Huang: I'm more confident this year, and the reason is the agentic systems we now see. AI is no longer a single language model; AI is a system of language models, all running concurrently, some using tools, some doing research.

Moderator: There are a ton of developments across the industry at the moment around multimodality, and looking at all the video content being generated, it's really amazing technology. And that brings us to the pivotal moment everyone is talking about this week: the massive OpenAI-Stargate partnership you announced a few days ago, in which you will be a priority partner investing $100 billion in the company over time. They will build 10 gigawatts of facilities, which could generate up to $400 billion in revenue for Nvidia if those facilities use Nvidia products.
Please help us understand this partnership: what it means to you, and why this investment makes so much sense for Nvidia.

Jensen Huang: First, I'll answer the last question before elaborating. I think OpenAI is likely to be the next hyperscale company at the trillion-dollar level.

Moderator: Why do you call it a hyperscale company? Hyperscale meaning that, like Meta and Google, they'll have consumer and enterprise services, and they're likely to be the next hyperscale company in the world's trillion-dollar class. I think you'd agree with that.

Jensen Huang: I agree. If so, the opportunity to invest before they reach that goal is one of the smartest investments we can imagine. You have to invest in areas you understand, and we happen to understand this area. So the return on this money will be fantastic, and we really like this investment opportunity. We didn't have to invest; it wasn't necessary for us to invest. But they gave us the opportunity to invest, which was a great thing.

Now let me start at the beginning. We are working with OpenAI on several projects. The first is the build-out on Microsoft Azure. We will continue to do so; this cooperation is progressing very well, we still have a few years of construction ahead, and there are hundreds of billions of dollars of work there alone.

The second is the build-out on OCI: I think about five, six, seven gigawatts to be built. We're building that with OCI, OpenAI, and SoftBank; those projects are all contracted, and there's a lot of work to be done. The third is CoreWeave, still in the context of OpenAI.

So the question is: what is this new partnership? This new partnership is about helping OpenAI build its own AI infrastructure for the first time. It is our direct collaboration with OpenAI at the chip level, the software level, the system level, and the AI-factory level, to help them become a fully operational hyperscale company. This will go on for a while, and it complements the two exponentials they are experiencing. The first exponential is growth in the number of customers, because AI is getting better, use cases are improving, and almost every app is now connected to OpenAI, so usage is growing exponentially. The second exponential is growth in compute per use: it's no longer one-shot inference but thinking before answering.

These two exponentials compound their computing demand. So we have to build all these different projects. This last project is additive to everything they've already announced and everything we've already done with them, and it will support this incredible exponential growth.

Moderator: One interesting point you made is that, in your view, they are likely to be a trillion-dollar company, which makes it a good investment. At the same time, you're helping them build their own data centers. So far they have outsourced the data-center build to Microsoft, and now they want to build their own full-stack factory.

Jensen Huang: They want... they basically want to have a relationship with us like Elon and X.

Moderator: I think that's really important when you consider the advantages Colossus has: they're building the full stack. That's hyperscale, because if they don't use the capacity, they can sell it to someone else. Similarly, Stargate is building huge capacity. They think they'll use most of it, but it also allows them to sell to others. That sounds a lot like AWS, GCP, or Azure.

Jensen Huang: They want to build the same direct relationship with us, both a direct working relationship and a direct purchasing relationship, just like our relationships with Zuckerberg and Meta, with Sundar and Google, and with Satya and Azure. They have reached a large enough scale to think it's time to start building these direct relationships. I'm happy to support this. Satya knows about it and Larry knows; everyone is informed, and everyone is very supportive.

Moderator: I've noticed an interesting phenomenon in the market landscape for Nvidia's accelerated computing. Oracle is building out its $300 billion project, we know what governments are building, we know what the hyperscale cloud providers are building, and Sam is talking about trillion-dollar investment.
But the 25 sell-side analysts on Wall Street covering the stock, if you look at their consensus expectations, basically see your growth flattening from 2027, with only 8% growth from 2027 to 2030. The only job of those 25 people is to forecast Nvidia's growth rate.

Jensen Huang: Frankly, we're pretty comfortable with that. We regularly exceed market expectations without any problem. But there is this interesting disagreement. I hear these opinions every day on CNBC and Bloomberg. I think it involves some questioning that shortage will turn into surplus; they don't believe in sustained growth. They say, "Well, we recognize your performance in 2026, but in 2027 there may be oversupply and you won't need that much."

Moderator: Interestingly, the consensus expectation is that this growth won't happen. We also build forecasts for the company, taking all of this data into account. That lets me see that even two and a half years into the AI era, there's a huge divide between what Sam Altman, you, Sundar, and Satya are saying and what Wall Street still believes.

Jensen Huang: I don't think there's a contradiction. Let me explain with three key points, which will hopefully help everyone feel more confident about Nvidia's future.

The first point is the physics perspective, and it's the most important: the era of general-purpose computing is over, and the future is accelerated computing and AI computing. What you have to consider is how many trillions of dollars of computing infrastructure around the world need to be refreshed; as it is refreshed, it will shift to accelerated computing. Everyone agrees on this. They all say, "Yes, we totally agree: the age of general-purpose computing is over, and Moore's Law is dead." That means general-purpose computing will shift to accelerated computing. Our partnership with Intel is about exactly that realization: general-purpose computing needs to converge with accelerated computing, which creates opportunities for them.

Second, the first application of AI is actually already ubiquitous, in search, recommendation engines, shopping, and other fields. The classic hyperscale infrastructure, which used to run recommendations on CPUs, is now turning to GPUs and AI. This is the shift from classical computing to accelerated computing and AI. Hyperscale computing is moving from CPUs to accelerated computing and AI, which means helping Meta, Google, ByteDance, Amazon, and others shift their traditional hyperscale workloads to AI, and that alone represents a market worth hundreds of billions of dollars.

Moderator: Considering platforms like TikTok, Meta, and Google, there may be 4 billion people worldwide whose workloads are driven by accelerated computing.

Jensen Huang: So even without considering the new opportunities AI creates, AI changing the way existing things are done is already of great value. It's like switching from kerosene lamps to electricity, or from propeller aircraft to jets.

Everything I've talked about so far is these fundamental shifts. Then, once you've moved to AI and accelerated computing, what new applications emerge? That's all the AI-related opportunity we talk about.

Simply put, just as motors once replaced labor and physical work, now we have AI.
These AI supercomputers and AI factories will generate tokens that augment human intelligence. Human intelligence accounts for roughly 55-65% of global GDP, about $50 trillion, and that $50 trillion will be augmented.

Start with a single person. Suppose I hire an employee on a $100,000 salary and then equip him with $10,000 of AI. If that $10,000 of AI makes the $100,000 employee two or three times more productive, would I do it? Without hesitation. I'm doing this for everyone in the company right now: every software engineer and every chip designer has AI collaborating with them, 100% coverage. The result is that we make better chips, in greater quantity, at greater speed. The company grows faster, employs more people, is more productive, earns more, and is more profitable.

Now apply Nvidia's story to global GDP. What is likely to happen is that the $50 trillion is augmented by $10 trillion, and that $10 trillion has to run on machines. What makes AI different from the past is that software used to be pre-written, run on CPUs, and operated by people. In the future, AI generates tokens: the machine must generate tokens and think, so the software is always running. Software used to be written once; now it is effectively being written, and thinking, all the time. And for AI to think, it needs a factory.

Suppose this $10 trillion of token generation carries a 50% gross margin; then $5 trillion of it requires factories and AI infrastructure. If you told me that annual global capital expenditure on this is about $5 trillion, I would say the math is reasonable. That's the future: move from general-purpose computing to accelerated computing, replace all the hyperscale servers with AI, and then augment the human intelligence behind global GDP.

Moderator: At present we estimate the annual revenue of this market at about $400 billion. So the TAM is going to grow 4 to 5 times from here.

Jensen Huang: Last night, Alibaba's Eddie Wu said that between now and the end of this decade they will increase data-center power tenfold. Nvidia's revenue is pretty much power-related. He also said that token generation doubles every few months. That means performance per watt must keep growing exponentially, which is why Nvidia keeps pushing performance per watt: in this future, revenue per watt basically is revenue.

This carries interesting historical context. For most of recorded history, GDP essentially did not grow. Then the Industrial Revolution came and GDP accelerated. The digital revolution came and GDP accelerated again. As you said, and as Scott Bessent has said, next year we may see 4% GDP growth. The point is that world GDP growth will accelerate, because we are now giving the world billions of colleagues who work for us. If GDP is the output of fixed labor and capital, it must accelerate.

Look at the changes AI technology brings. AI, including large language models and all the AI agents, is creating a new AI-agent industry. OpenAI is the fastest revenue-growing company in history, growing exponentially. AI itself is a fast-growing industry, because AI needs the factories and infrastructure behind it.
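Huang's back-of-envelope figures above can be written out explicitly. A minimal sketch; every input is an assumption quoted from the conversation (the $10 trillion value-add, the 50% gross margin, the 10 GW / $400 billion build-out, the $100,000-employee example, and one reading of "doubles every few months"), not independent data:

```python
# All inputs below are assumptions quoted from the interview, not forecasts.

# 1) $10T of AI value-add at a 50% gross margin leaves $5T/yr for factories.
value_added = 10e12
gross_margin = 0.50
annual_capex = value_added * (1 - gross_margin)
print(f"implied annual AI-factory capex: ${annual_capex / 1e12:.1f}T")  # $5.0T

# 2) 10 GW of capacity costing ~$400B implies ~$40B per gigawatt.
capex_per_gw = 400e9 / 10
print(f"implied cost per GW: ${capex_per_gw / 1e9:.0f}B")  # $40B

# 3) A $100K employee made 2x as productive by $10K of AI tooling:
#    $100K of extra output for $10K spent, a 10:1 payoff.
extra_output = 100_000 * (2.0 - 1.0)
payoff_ratio = extra_output / 10_000
print(f"productivity payoff: {payoff_ratio:.0f}:1")  # 10:1

# 4) "Token generation doubles every few months": assuming a 3-month
#    doubling period, volume compounds to 16x per year.
annual_multiple = 2 ** (12 / 3)
print(f"annual token-volume multiple: {annual_multiple:.0f}x")  # 16x
```

Note the tension the moderator keeps probing: $5 trillion of implied annual capex is far above today's roughly $400 billion run rate, which is why the conversation repeatedly returns to whether demand signals justify the build-out.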
This industry is growing, my industry is growing, and the industries beneath mine are also growing. Energy is growing; this is a renaissance for the energy industry, and all the companies in the infrastructure ecosystem, such as nuclear and gas turbines, are doing well.

These numbers have everyone talking about surpluses and bubbles. Zuckerberg said on a podcast last week that he thinks there will likely be a vacuum period at some point where Meta could overspend by $10 billion, but that it doesn't matter, because this is so important to the future of their business that it's a risk they have to take. It sounds a bit like a prisoner's dilemma, but these are very happy prisoners.

Moderator: We estimate $100 billion of AI revenue in 2026, excluding Meta, excluding GPUs running recommendation engines, and excluding functions like search. The hyperscale server industry is already at trillion-dollar scale, and it will turn to AI. Skeptics say we need to grow from $100 billion of AI revenue in 2026 to at least $1 trillion in 2030. You just mentioned $5 trillion when talking about global GDP. Analyzing bottom-up, can you see AI-driven revenue growing from $100 billion to $1 trillion in the next five years? Are we growing that fast?

Jensen Huang: Yes, and I would even say we have already reached it, because the hyperscalers have moved from CPUs to AI, so their entire revenue base is now AI-driven.

You can't do TikTok without AI, you can't do YouTube Shorts, you can't do these things without AI. The amazing work Meta does customizing and personalizing content can't happen without AI. Previously these jobs were done by people: four choices were created beforehand, and a recommendation engine selected among them. Now it becomes an unlimited number of choices generated by AI.

But these things are already happening. We've transitioned from CPUs to GPUs, mostly for these recommendation engines, and that's been fairly recent, within the last three or four years. When I see Zuckerberg at SIGGRAPH, he'll tell you they got off to a late start with GPUs. Meta has been using GPUs for about two and a half years, which is fairly new. Doing search on GPUs is absolutely new.

Moderator: So your argument is that the probability we reach $1 trillion of AI revenue by 2030 is almost certain, because we've almost reached it already. Let's talk about the incremental part from here. I just heard your top-down analysis of the share of global GDP. Whether bottom-up or top-down, what do you think is the probability that we run into oversupply in the next three, four, or five years?

Jensen Huang: It's a distribution, and we don't know the future. But until we completely convert all general-purpose computing to accelerated computing and AI, I think it's extremely unlikely, and that will take years.

Until all recommendation engines are based on AI, until all content generation is based on AI — because consumer-facing content generation is very much recommendation systems and the like, all of which will be generated by AI — and until all the classic hyperscale businesses, everything from shopping to e-commerce, turn to AI. Until everything turns.

Moderator: But all this new construction — when we're talking about a trillion dollars, we're investing for the future. Are you obligated to invest those funds even if you see a slowdown or some sort of oversupply coming? Or is this you waving the flag at the ecosystem, telling them to go build, and if we see a slowdown, we can always dial the investment back?

Jensen Huang: Actually the opposite is true, because we are at the end of the supply chain and we respond to demand. Any VC will tell you right now that demand outstrips supply. The world has a computing shortage, but not because there is a shortage of GPUs: if they give me an order, I will produce it. Over the last few years we've really opened up the supply chain, and we're ready at every stage, from wafer starts to packaging to HBM memory and more. If it needs to double, we will double. The supply chain is ready.

So now we're just waiting for demand signals. When the cloud service providers, the hyperscalers, and our customers make their annual plans and give us forecasts, we respond and produce accordingly.
Of course, <strong>what happens now is that every forecast they give us is wrong, because they are under-forecasting, so we are always in emergency mode.</strong> We've been in emergency mode for a few years now; every forecast we receive is a significant increase over last year, and it's still not enough.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Satya seemed to pull back a bit last year, with some calling him the adult in the room, tamping down those expectations. A few weeks ago he said that this year we have built 2 gigawatts and in the future we will accelerate. Do you see some traditional hyperscalers moving a little slower than CoreWeave or Elon's xAI, maybe a little slower than Stargate? It sounds like they're all more actively engaged right now.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Because of the second exponential.</p><p><p style=\"text-align: justify;\">We've already experienced exponential growth in AI adoption and engagement. <strong>The second exponential is the ability to reason.</strong> This is a conversation we had a year ago. We said a year ago that when you move AI from one-shot, memorized answers to memorizing and generalizing (which is basically pre-training), like remembering what 8 times 8 equals, that is one-shot AI. A year ago, reasoning appeared, tool use appeared, and now you have thinking AI, which takes a billion times more computation.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Certain hyperscale customers have internal workloads, and they have to migrate from general-purpose computing to accelerated computing anyway, so they keep building. 
I think maybe some hyperscalers have different workloads, so they weren't sure how quickly they could digest it, but now everyone has concluded that they're badly under-built.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">One of my favorite applications is traditional data processing, both structured and unstructured data. We will soon announce a very large program for accelerated data processing. Data processing accounts for the vast majority of CPU use in the world today, and it still runs almost entirely on CPUs. Go to Databricks, it's mostly CPUs; go to Snowflake, mostly CPUs; Oracle's SQL processing, mostly CPUs. Everyone is doing SQL structured data processing on CPUs. In the future, all of this will move to AI data processing.</p><p><p style=\"text-align: justify;\">This is a huge market that we are going to enter. But you need everything NVIDIA does: you need the acceleration layers and domain-specific data-processing recipes, and we have to build that.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">When I turned on CNBC yesterday, they were talking about an oversupply bubble; when I turned on Bloomberg, they were talking about the circular-revenue issue.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">When someone questions our investment and business relationships with companies like OpenAI, I need to clarify a few points. First, circular-revenue arrangements typically refer to companies entering into misleading transactions that artificially inflate revenue without any underlying economic substance. In other words, growth driven by financial engineering rather than customer demand. 
The typical case everyone cites is, of course, Cisco and Nortel in the last bubble 25 years ago.</p><p><p style=\"text-align: justify;\">When we, or Microsoft, or Amazon invest in companies that are also big customers of ours, such as when we invest in OpenAI and OpenAI buys tens of billions of dollars of chips, I want to remind everyone that the analysts on platforms like Bloomberg are wrong to worry excessively about circular revenue or round-trip transactions.</p><p><p style=\"text-align: justify;\">10 GW would require about $400 billion in investment, and that $400 billion would largely need to be funded through their offtake agreements, that is, their exponentially growing revenue. It has to be financed through their capital, through equity financing, and through the debt they can raise. Those are the three financing instruments. The equity and debt they are able to raise depend on the confidence they can sustain in their revenue. Smart investors and lenders weigh all of these factors. Fundamentally, that's their job; it's their company, not my business. Of course, we stay in close contact with them and make sure our build-out supports their continued growth.</p><p><p style=\"text-align: justify;\">The revenue side has nothing to do with the investment side. The investment side is not tied to anything; it is simply an opportunity to invest in them. As we said before, this could very well be the next multi-trillion-dollar hyperscale company. Who wouldn't want to be an investor in it? <strong>My only regret is that they invited us to invest early. I remember those conversations; we were too poor to invest enough, and I should have given them all my money.</strong></p><p><p style=\"text-align: justify;\">The reality is that if we don't do our job and keep up, if Rubin isn't a good chip, they can source other chips for these data centers. They have no obligation to use our chips. 
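The 10 GW and $400 billion figures quoted above imply a simple capex ratio. A minimal sketch of that division, assuming the per-gigawatt cost is inferred from those two numbers alone (it is not stated separately in the interview):

```python
# Capex per gigawatt implied by the "10 GW ≈ $400 billion" figure above.
total_investment = 400e9   # dollars, from the interview
total_power_gw = 10        # gigawatts, from the interview

capex_per_gw = total_investment / total_power_gw
print(f"Implied capex: ${capex_per_gw / 1e9:.0f}B per gigawatt")  # → $40B per gigawatt
```

This back-of-envelope ratio is only a first-order check; real projects differ in chip mix, facility cost, and financing terms.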
As mentioned earlier, we view this as an opportunistic equity investment.</p><p><p style=\"text-align: justify;\">We've also made some great investments, and I have to say it: we invested in xAI, we invested in CoreWeave, and those were great, very smart investments.</p><p><p style=\"text-align: justify;\">Getting back to the root question: we are open and transparent about what we are doing. There is underlying economic substance here; we are not simply passing revenue back and forth between two companies. Someone pays for ChatGPT every month, and 1.5 billion monthly active users are using the product. Every business will either adopt this technology or die. Every sovereign state sees this as existential for its national security and economic security, just as it did nuclear energy. What person, company, or country would say that intelligence is optional for them? It is fundamental to all of them. Intelligent automation: I have fully addressed the demand question.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">On system design: in 2024 you moved to the annual release cycle, starting with Hopper. Then you did a massive upgrade that required a major data-center overhaul and launched Grace Blackwell. In 2025 and the second half of 2026 you will launch Vera Rubin, then Rubin Ultra in 2027 and Feynman in 2028.</p><p><p style=\"text-align: justify;\">How is the shift to an annual release cycle progressing? What are the main objectives? And does AI help you execute the annual release cycle? The answer is yes: without AI, Nvidia's speed, pace, and scale would be limited.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">The answer to that last question is yes. Without AI these days, it is simply impossible to build the products we build.</p><p><p style=\"text-align: justify;\">Why do we do this? 
Remember what Eddie said on the earnings call, what Satya said, what Sam said: token generation rates are growing exponentially and customer usage is growing exponentially. I think they have about 800 million weekly active users, less than two years after the release of ChatGPT. And each user is generating more tokens because they are using inference-time reasoning.</p><p><p style=\"text-align: justify;\">First, because the token generation rate is growing at an incredible rate, two exponentials stacked together, the cost of generating tokens will keep growing unless we improve performance at an incredible rate. Moore's Law is dead: transistor cost is basically flat year over year, and power is basically flat. Between those two basic facts, unless we come up with new technology to drive the cost down, how do you make up for two stacked exponentials by giving someone a few percentage points off?</p><p><p style=\"text-align: justify;\"><strong>Therefore, we have to improve performance every year at a rate that keeps up with the exponential growth.</strong> From Kepler all the way to Hopper is maybe 100,000x; that was the beginning of Nvidia's AI journey, 100,000x in ten years. Between Hopper and Blackwell we achieved 30x in a year thanks to NVLink 72; then Rubin will deliver another X-factor, and Feynman another X-factor after that.</p><p><p style=\"text-align: justify;\">We do this because transistors don't help us much anymore; with Moore's Law, density is still growing but performance is not. So one of the challenges we face is having to break the whole problem down at the system level, changing every chip, every software stack, and every system at once. This is the ultimate extreme co-design; no one has co-designed at this level before. 
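The growth rates quoted above, 100,000x over ten years and 30x in a single generation, can be annualized with simple arithmetic to show why Moore's Law alone cannot keep pace. A rough sanity check, not a calculation from the interview:

```python
# Annualized performance growth implied by the figures in the interview.
kepler_to_hopper = 100_000 ** (1 / 10)   # 100,000x over ~10 years
print(f"Kepler to Hopper: ~{kepler_to_hopper:.1f}x per year")   # ~3.2x per year

hopper_to_blackwell = 30                 # 30x in one annual cycle
print(f"Hopper to Blackwell: {hopper_to_blackwell}x in one year")

# For comparison, Moore's Law at ~2x every two years would give:
moores_law_annual = 2 ** 0.5
print(f"Moore's Law pace: ~{moores_law_annual:.2f}x per year")  # ~1.41x per year
```

The gap between ~1.4x per year from transistor scaling and 30x per generation is the quantitative version of the "innovate outside the box" argument.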
We revolutionize CPUs, GPUs, network chips, NVLink scale-up, and Spectrum-X scale-out.</p><p><p style=\"text-align: justify;\">Someone says, \"Oh yeah, it's just Ethernet.\" Spectrum-X Ethernet isn't just Ethernet. People are starting to find out, oh my gosh, the X-factor is pretty incredible. Nvidia's Ethernet business, just the Ethernet business, is the fastest-growing Ethernet business in the world.</p><p><p style=\"text-align: justify;\">We scale up and of course build bigger systems, we scale out, and we scale across multiple AI factories to connect them together. We carry out this work on an annual cycle, so we have now achieved exponential growth in the technology. This lets our customers reduce token costs while making those tokens smarter and smarter through pre-training, post-training, and thinking. <strong>The result is that as AIs become smarter, their use increases; and when usage increases, they grow exponentially.</strong></p><p><p style=\"text-align: justify;\">For those who may not be familiar: what is extreme co-design? Extreme co-design means you optimize models, algorithms, systems, and chips at the same time. You have to innovate outside the box. Moore's Law said you just make the CPU faster and faster and everything gets faster; that is innovating inside the box, just making the chip faster. But if the chip can't get any faster, what do you do? Innovate outside the box. Nvidia really changed things: we invented CUDA, we invented the GPU, and we invented the idea of extreme co-design. That's why we're involved in all of these industries; we create all these libraries and co-design across them.</p><p><p style=\"text-align: justify;\">The full-stack limit goes even beyond software and GPUs. Now it's switches and networks at the data-center level, the software in all of those switches and networks, the network interface cards, scale-up and scale-out, optimizing across all of it. 
So Blackwell has a 30x improvement over Hopper, and no amount of Moore's Law can achieve that. This is the limit, and it comes from extreme co-design. That's why Nvidia went into networking and switching, scale-up and scale-out, scaling across systems, building CPUs and GPUs and network interface cards.</p><p><p style=\"text-align: justify;\">That's why Nvidia is so rich in software and talent. We contribute more open-source software to the world than almost any company, except one, I think AI2 or something. We have enormous software resources, and that's only in AI. Don't forget computer graphics, digital biology, self-driving cars; the amount of software this company produces is incredible, and that is what allows us to do deep, extreme co-design.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">I've heard from one of your competitors that, yes, you do this because it helps reduce token-generation costs, but at the same time your annual release cycle makes it nearly impossible for your competitors to keep up. And the supply chain is locked in even more because you give it three years of visibility; now the supply chain has confidence in what it can build. Have you thought about this?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Before you ask that, consider this: for us to ship hundreds of billions of dollars of AI infrastructure every year, think about how much capacity we had to start preparing a year ago. 
We're talking about committing hundreds of billions of dollars in wafer starts and DRAM procurement.</p><p><p style=\"text-align: justify;\">This has now reached a scale that almost no company can keep up with.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">So do you think your competitive moat is bigger than it was three years ago?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Yes. First of all, <strong>the competition now is fiercer than ever, but it is also harder than ever. I say this because wafer costs keep rising, which means you can't achieve an X-fold growth factor unless you do co-design at extreme scale,</strong> unless you develop six, seven, eight chips a year, which is remarkable. It's not about building an ASIC; it's about building an AI factory system. The system has a lot of chips, and they're all designed in tandem to deliver the 10x factor that we get almost routinely.</p><p><p style=\"text-align: justify;\">First, co-design is the limit. Second, scale is the limit. When your customer deploys a gigawatt, that's 400,000 to 500,000 GPUs. Getting 500,000 GPUs to work together is a miracle. Your customers take a huge risk on you when they buy all of this. Ask yourself: what customer would place a $50 billion purchase order on an architecture? On an unproven, brand-new architecture? A brand-new chip; you're excited about it, everyone is excited for you, you just showed off first silicon. Who will give you a $50 billion purchase order? Why would anyone start $50 billion of wafers for a chip that has just taped out?</p><p><p style=\"text-align: justify;\">But with Nvidia we can do that, because our architecture is proven. The scale of our customers is incredible. Now the scale of our supply chain is also incredible. 
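The gigawatt-to-GPU ratio quoted above implies an all-in power budget per GPU that can be checked with simple division. A sketch, noting that the per-GPU wattage is derived from the interview's range, not quoted directly, and covers cooling, networking, and facility overhead rather than chip TDP alone:

```python
# All-in power per GPU implied by "1 GW ≈ 400,000 to 500,000 GPUs".
gigawatt = 1e9  # watts

for gpu_count in (400_000, 500_000):
    watts_per_gpu = gigawatt / gpu_count
    print(f"{gpu_count:,} GPUs -> {watts_per_gpu / 1000:.1f} kW per GPU all-in")
# 400,000 GPUs -> 2.5 kW per GPU; 500,000 GPUs -> 2.0 kW per GPU
```

That 2.0 to 2.5 kW all-in figure is why power, not chip supply, is treated as the binding constraint later in the conversation.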
Who would spin all of that up for one company, pre-building all of it, unless they knew Nvidia could deliver? They trust us to deliver to customers all over the world. They are willing to commit hundreds of billions of dollars at once. The scale is incredible.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">One of the biggest debates and controversies in the world about all this is the GPU-versus-ASIC question: Google's TPU, Amazon's Trainium, and everyone from ARM to OpenAI to Anthropic is rumored to be building ASICs. Last year you said you're building systems, not chips, and you drive performance through every part of the stack. You also said that many of these projects will likely never reach production scale. But given how many projects there are, and the apparent success of Google's TPU, how do you see this landscape evolving today?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\"><strong>First, Google has the advantage of being forward-thinking. Remember, they started TPU1 before everything took off.</strong> It's no different from a startup: you should create the startup before the market grows. You shouldn't show up as a startup when the market has already reached a trillion dollars. That's the fallacy, and every VC knows it: the idea that if you can take a few percentage points of share in a large market, you can become a giant company. That is fundamentally wrong.</p><p><p style=\"text-align: justify;\">You should have 100% market share of a small industry, which is what Nvidia and the TPU did. It was just our two companies back then, but you have to hope the industry gets really big. You're creating an industry.</p><p><p style=\"text-align: justify;\">The Nvidia story illustrates this point. And this is the challenge for companies building ASICs now. 
While the market looks tempting right now, remember that this tempting market has evolved from a chip called a GPU into the AI factory I just described. You just saw me announce a chip called CPX for context processing and diffusion video generation, a very specialized but important workload within a data center. I just mentioned the possibility of AI data-processing processors, because you need long-term memory and short-term memory, and KV-cache processing is very intensive. AI memory is a big problem. You want your AI to have a good memory, and just handling the KV cache for the whole system is very complicated and may require a specialized processor.</p><p><p style=\"text-align: justify;\">You can see that Nvidia's view is no longer just GPUs. Our view is the entire AI infrastructure and what these great companies need to handle their diverse, ever-changing workloads.</p><p><p style=\"text-align: justify;\">Look at the transformer. The transformer architecture is changing dramatically. If CUDA weren't so easy to manipulate and iterate on, how could they have run all the experiments needed to decide which version of the transformer to use, which attention algorithm, how to decompose it? CUDA makes all of that possible because it is highly programmable.</p><p><p style=\"text-align: justify;\">Now, the way to think about our business: when all these ASIC companies or ASIC projects started three, four, five years ago, I have to tell you, the industry was very simple. A GPU was involved. But now it's massive and complex, and in two more years it will be completely massive. The scale will be enormous. So I think <strong>the battle to enter a very large market as a new entrant is difficult. 
This is true even for customers who may succeed with ASICs.</strong></p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Investors tend to be binary creatures who just want a black-and-white yes or no. But even if you get the ASIC working, isn't there an optimal balance? Because I think when you buy the Nvidia platform, CPX will be rolled out for prefill, video generation, possibly decode, and so on.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Yes, so there will be many different chips or parts added to the accelerated-compute cluster of the Nvidia ecosystem as new workloads emerge. People trying to tape out new chips now aren't really predicting what will happen a year from now; they're just trying to get the chips to work.</p><p><p style=\"text-align: justify;\">By the way, Google is a big customer for GPUs. Google is a very special case, and we must show due respect: the TPU has reached TPU7. It was a challenge for them too, and the work they did was very difficult.</p><p><p style=\"text-align: justify;\">Let me explain: there are three classes of chips. The first class is architectural chips: x86 CPUs, ARM CPUs, Nvidia GPUs. They are architectures, with ecosystems on top; architectures have rich IP and ecosystems, and the technology is very complex and built by owners like us.</p><p><p style=\"text-align: justify;\">The second class is the ASIC. I worked for LSI Logic, the original company that invented the ASIC concept. As you know, LSI Logic no longer exists. The reason is that ASICs are really great when the market is not very large: it's easy to have a contractor pack everything up and build it on your behalf, and they charge a 50-60% gross margin. But when an ASIC market gets bigger, there is a practice called customer-owned tooling. Who does that? 
Apple's smartphone chips. Apple's smartphone chips ship at such volume that Apple will never pay someone a 50-60% gross margin to build them as ASICs. They use customer-owned tooling.</p><p><p style=\"text-align: justify;\"><strong>So where does the TPU go when it becomes a big business? Customer-owned tooling, no doubt.</strong></p><p><p style=\"text-align: justify;\">But the ASIC has its place. Video transcoders never get too big, and smart NICs never get too big. I'm not surprised when an ASIC company has 10, 12, 15 ASIC projects, because five of them may be smart NICs and four may be transcoders. Are they all AI chips? Of course not. If someone builds an embedded processor as an ASIC for a specific recommendation system, that can certainly be done. But would you use it as the foundational computing engine for AI that is always changing? You have low-latency workloads, high-throughput workloads, token generation for chat, thinking workloads, AI video-generation workloads.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">You're talking about the backbone of accelerated computing.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">That's what Nvidia is all about.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Simply put, it's like the difference between playing chess and checkers. The truth is that companies starting to do ASICs today, whether it's Trainium or some other accelerator, are building a chip that is just one component of a much larger machine.</p><p><p style=\"text-align: justify;\">You built a very complex system, platform, factory, and now you're opening it up a little bit. 
You mentioned the CPX GPU, and in some ways you are decomposing the workload onto the hardware best suited to each particular piece.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">We announced something called Dynamo, disaggregated AI-workload orchestration, and we open-sourced it, because the AI factory of the future is disaggregated.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">You launched NVLink Fusion and said, even to competitors including Intel: here is a way to participate in this factory we're building, because no one else is crazy enough to try to build the whole factory; but if you have something good enough and compelling enough that the end user says, \"We want to use this instead of an ARM CPU, or we want to use this instead of your inference accelerator,\" you can plug in.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">We are thrilled to be able to build that connection. Fusion is really a great idea, and we're excited to work with Intel on it. It leverages Intel's ecosystem, on which most of the world's enterprises still run. We merge the Intel ecosystem with the Nvidia AI and accelerated-computing ecosystem. We do the same with ARM, and there are several other companies we will work with as well. This opens up opportunities for both sides; it's a win-win. I will be their big customer, and they will expose us to bigger market opportunities.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">This goes hand in hand with a point you made that may have shocked some. You said your competitors are building ASICs, and their chips are already cheaper today, but they could even set the price at zero. 
The point being: even if they set the chip price at zero, you will still buy the Nvidia system, because the total cost of running the system, the power, data center, land, and so on, means the intelligence it outputs is still more cost-effective than the free chip. Because the land, electricity, and infrastructure are already worth $15 billion. You've done the analysis on this math.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">But would you please walk us through your calculation, because I think it's really hard to grasp for those who don't think about it often. Your chips are so expensive; <strong>with competitors' chips priced at zero, how is it possible that yours is still the better choice?</strong></p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">There are two ways to think about it. One is from the revenue perspective. Everyone is limited by power. Suppose you are able to get an extra 2 gigawatts of power. You want to convert those 2 gigawatts into revenue. If you have twice the performance, twice the tokens per watt, of anyone else because you do deep, extreme co-design, then <strong>my performance per unit of energy is much higher, and my customers can generate twice as much revenue from their data centers. Who doesn't want twice the revenue?</strong></p><p><p style=\"text-align: justify;\">If someone gives them a 15% discount, the difference between our gross margin (about 75 points) and everyone else's (about 50 to 65 points) is not enough to make up for a 30x performance difference like Blackwell over Hopper. Let's say Hopper is a brilliant chip and system, and let's say someone else's ASIC is a Hopper. 
Blackwell is 30 times more capable. So in that gigawatt, you would be giving up 30 times the revenue. The price is too high. Even if they provide the chips for free, you only have 2 gigawatts to use, and the opportunity cost is extremely high. You will always choose the best performance per watt.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">I heard from the CFO of a hyperscale cloud service provider that given the performance gains your chips bring, precisely on this point of tokens per gigawatt, with power as the limiting factor, they have to upgrade to each new cycle. Will this trend continue as you look at Rubin, Rubin Ultra, Feynman?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">We now build six or seven chips a year, and each one is part of the system. System software is everywhere and has to be integrated and optimized across all six or seven chips to achieve Blackwell's 30x performance improvement. Now imagine I do this every year, consistently. If you build a single ASIC against this family of chips that we optimize together across the whole system, that is a very difficult problem to beat.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Which brings me back to the question about the competitive moat at the beginning. We've been watching this; we're investors in the ecosystem and also in your competitors, from Google to Broadcom. But when I think about it from first principles, are you increasing or decreasing your competitive moat?</p><p><p style=\"text-align: justify;\">You moved to an annual cadence, co-developing with the supply chain. The scale is much larger than anyone expected, which requires a balance sheet and development at that scale. 
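The free-chip opportunity-cost argument above can be made concrete with rough numbers. A minimal sketch: the 30x performance gap comes from the interview, but the power budget's revenue baseline and the system-cost figures below are illustrative assumptions, not numbers Huang gives:

```python
# Sketch of the opportunity-cost argument: with power as the binding
# constraint, performance per watt drives revenue, so a free but slower
# chip still loses. All dollar figures are illustrative assumptions;
# only the 30x performance gap comes from the interview.
perf_gap = 30                  # Blackwell vs. a Hopper-class ASIC (from interview)
revenue_slow_chip = 10e9       # assumed: $10B/yr from 1 GW of the slower chip
system_cost_paid = 35e9        # assumed: full system capex, chips included
system_cost_free_chips = 15e9  # assumed: land, power, facility only

revenue_fast_chip = revenue_slow_chip * perf_gap

# Even with the slower chips priced at zero, the forgone revenue
# dwarfs the capex saved by taking the free chips:
forgone_revenue = revenue_fast_chip - revenue_slow_chip
capex_savings = system_cost_paid - system_cost_free_chips
print(f"Forgone revenue per year: ${forgone_revenue / 1e9:.0f}B")
print(f"Capex saved with free chips: ${capex_savings / 1e9:.0f}B")
```

Under these assumptions the annual revenue given up is an order of magnitude larger than the one-time capex saved, which is the shape of the argument regardless of the exact dollar inputs.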
Your initiatives through acquisitions and organic growth include NVLink Fusion, CPX, and more that we just discussed. All of these factors convince me that your competitive moat is strengthening, at least when it comes to building factories or systems, which is at least surprising. But interestingly, your P/E is much lower than most other companies'. I think part of this has to do with the law of large numbers: the idea that a $4.5 trillion company can't get any bigger. But I asked you this a year and a half ago, and you're sitting here today. If the market grows to a 5x or 10x increase in AI workloads, and we know what capex is doing and so on, then in your opinion, is there any conceivable scenario in which your top line in five years isn't 2x or 3x higher than in 2025? Given these advantages, what is the probability that revenue won't actually be much higher than it is today?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">I'd answer this way. As I described, our opportunity is much greater than the consensus. And I'm here to say: <strong>I think Nvidia is likely to be the first $10 trillion company.</strong> I've been around long enough that, just a decade ago, you'll remember, people said there could never be a trillion-dollar company. Now we have ten.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">The world is bigger today; back to the exponential developments in GDP and growth.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">The world is bigger, and people misunderstand what we do. They remember us as a chip company. We do make chips; we make the best chips in the world.</p><p><p style=\"text-align: justify;\">But Nvidia is actually an AI infrastructure company. We are your AI infrastructure partner, and our partnership with OpenAI is the perfect proof. 
We are their AI infrastructure partner, and we work with people in many different ways. We don't require anyone to purchase everything from us. We don't require them to purchase an entire rack. They can buy chips, components, our networking equipment. We have customers who buy only our CPUs, or only our GPUs with other people's CPUs and networking. We can sell any way you like. My only ask is that you buy something from us.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">We talked about Elon Musk, xAI, and the Colossus 2 project. As I mentioned before, it's not just about better models; we have to build. We must have world-class builders, and I think the top builder in our country is probably Elon Musk.</p><p><p style=\"text-align: justify;\">We talked about Colossus 1 and the work he did there, building a coherent cluster of hundreds of thousands of H100s and H200s. Now he's working on Colossus 2, which could contain half a million GPUs, the equivalent of millions of H100s in a coherent cluster. I wouldn't be surprised if he reaches the gigawatt level faster than anyone else. What are the advantages of being a builder who can build software and models and also understands what it takes to build these clusters?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">I would say that these AI supercomputers are complex systems. The technology is complex; the procurement is complicated by financing; getting land, electricity, and infrastructure and powering it all is complicated; so is building and bringing up all of these systems. This is perhaps the most complex systems problem humanity has undertaken to date.</p><p><p style=\"text-align: justify;\">Elon has the huge advantage of holding all of these systems, working together with all their interdependencies, in one mind, including the financing. 
So he's a big GPT in his own right, a big supercomputer, the ultimate GPU. He has a big advantage here, and he has a strong sense of urgency and a genuine desire to build it. When will is combined with skill, the unthinkable happens. He is rather unique.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Next I want to talk about sovereign AI and the global AI race. Thirty years ago, you couldn't have imagined you'd be meeting crown princes and kings in their palaces and frequenting the White House. The President says you and Nvidia are vital to U.S. national security.</p><p><p style=\"text-align: justify;\">When I look at the situation, you wouldn't be in those rooms if governments didn't see this as at least as existential as nuclear technology was in the 1940s. And while we don't have a government-funded Manhattan Project today, it is funded by Nvidia, OpenAI, Meta, Google. We have companies the size of nations today, funding causes that presidents and kings believe are of existential significance for their future economies and national security. Do you agree with this view?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\"><strong>No one needs an atomic bomb, but everyone needs AI. That's a very big difference.</strong> AI is modern software. That's where I started: from general-purpose computing to accelerated computing, from hand-written line-by-line code to AI-written code. This foundation cannot be forgotten; we have reinvented computing. No new species emerged on Earth; we just reinvented computing. Everyone needs computing, and it needs to be democratized.</p><p><p style=\"text-align: justify;\">This is why all countries realize they have to enter the AI world, because everyone needs to keep up with computing. 
No one in the world will say, \"I was using computers yesterday, and tomorrow I'm going to go back to using sticks and fire.\" Everyone needs to move to computing; it's just modernization.</p><p><p style=\"text-align: justify;\">First, to participate in AI, you have to encode your history, culture, and values in AI. Of course, AI is getting smarter and smarter, and even core AI is able to learn these things fairly quickly. You don't have to start from scratch.</p><p><p style=\"text-align: justify;\">So I think every country needs to have some sovereign capability. I suggest that they all use OpenAI, Gemini, these open models, and Grok, and I suggest that they all use Anthropic. But they should also devote resources to learning how to build AI. The reason is that they need to learn how to build it, not just for language models, but also for industrial models, manufacturing models, national security models. There is a lot of intelligence that they need to cultivate themselves. So they should have sovereign capability, and every country should develop it.</p><p><p style=\"text-align: justify;\">Is that the same thing you hear around the world? They are all aware of this. They will all be customers of OpenAI, Anthropic, Grok, and Gemini, but they also do need to build their own infrastructure. That's the important philosophy behind what Nvidia is doing: we're building infrastructure. Just as every country needs energy infrastructure and communications and internet infrastructure, now every country needs AI infrastructure.</p><p><p style=\"text-align: justify;\">Let's start with the rest of the world. Our good friend David Sacks and the AI team have done a fantastic job. We are very fortunate to have David and Shriram in Washington, D.C. David is serving as the AI czar, and what a smart move by President Trump to put them in the White House.</p><p><p style=\"text-align: justify;\">At this critical time, the technology is complex. 
Shriram is the only person in Washington, D.C. who I think understands CUDA, strange as that is. But I just love the fact that at this critical time, when technology is complex, policies are complex, and the impact on the future of our country is so significant, we have someone who is clear-minded, has invested time in understanding the technology, and thoughtfully helps us through it.</p><p><p style=\"text-align: justify;\"><strong>Technology, like corn and steel before it, is now a basic item of trade.</strong>It is an important part of trade. Why wouldn't you want American technology to be desired by everyone, so that it can be used in trade?</p><p><p style=\"text-align: justify;\">Trump has done several things, and they have been really good. The first is the re-industrialization of the United States: encouraging companies to come to the United States to build, invest in factories, and retrain and upskill the skilled workforce, which is extremely valuable to our country. We love craft, I love people who make things with their hands, and now we're going back to building things, building magnificent, incredible things, and I love that.</p><p><p style=\"text-align: justify;\">It will change America, no doubt. We must recognize that reindustrialization in the United States will be fundamentally transformative.</p><p><p style=\"text-align: justify;\">Then of course AI. It is the biggest equalizer. Think about how everyone can have AI now. This is the ultimate equalizer. We have bridged the technological divide. Remember, the last time someone had to learn to use a computer for financial or professional benefit, they had to learn C++ or C, or at least Python. Now they just need to know human language. If you don't know how to program AI, you tell the AI, \"Hi, I don't know how to program AI. How do I program AI?\" and the AI will explain it to you or simply do it for you. That's incredible, isn't it? 
We are now bridging the technological divide with technology.</p><p><p style=\"text-align: justify;\">It is something that everyone has to be involved in. OpenAI has 800 million active users. Gosh, it really needs to get to 6 billion. It really needs to hit 8 billion soon. I think that's point number one. And then points two and three: I think AI is going to change every task.</p><p><p style=\"text-align: justify;\">What people get confused about is that many tasks will be eliminated and many tasks will actually be created. But it is likely that for many people, their jobs are effectively protected.</p><p><p style=\"text-align: justify;\">For example, I've been using AI. You've been using AI. My analysts have been using AI. My engineers, each of them, use AI constantly. And we are hiring more engineers, hiring more people, hiring across the board. The reason is that we have more ideas; we can pursue more ideas now. Our company has become more productive, and because we are more productive, we become richer. We get richer, and we can hire more people to pursue those ideas.</p><p><p style=\"text-align: justify;\"><strong>The concept of AI bringing about massive job disruption begins with the premise that we don't have more ideas. It starts with the premise that we have nothing left to do.</strong>As if everything we do in our lives today is all there is. If someone else does that one task for me, I am left with nothing. Now I have to sit there waiting for something, waiting for retirement, sitting in my rocking chair. That idea doesn't make sense to me.</p><p><p style=\"text-align: justify;\"><strong>I think intelligence is not a zero-sum game. 
The smarter the people around me, the more geniuses I have around me, and, surprisingly, the more ideas I have, the more problems I imagine we can solve, and the more jobs we create.</strong>I don't know what the world will be like in a million years; that will be left to my children. But my feeling over the next few decades is that the economy will grow. A lot of new jobs will be created. Every job will be changed. Some work will be lost. We're not going to be rioting in the streets; these things will all turn out fine. Humans are notoriously skeptical and bad at understanding compounding systems, and even worse at understanding exponential systems that accelerate with scale.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">We talked a lot about exponentials today. The great futurist Ray Kurzweil said that in the 21st century we will not have a hundred years of progress; we may have twenty thousand years of progress.</p><p><p style=\"text-align: justify;\">You said earlier that we are blessed to live in and contribute to this moment. I wouldn't ask you to look ahead 10, 20, or 30 years because I think it's too challenging. But when we think about 2030 and things like robots... is 30 years easier than 2030? Really? Yes. Okay, so I'll give you permission to look ahead 30 years. When I think about the process, I like these shorter time frames because we have to combine bits and atoms, which is the hard part of building these things.</p><p><p style=\"text-align: justify;\">Everyone is saying this is about to happen, which is interesting but not entirely useful. 
But if we have 20,000 years of progress, think about that quote from Ray, think about exponential growth, and think about how all of our listeners, whether you're working in government, in a startup, or running a big company, need to think about the speed of accelerating change, the speed of accelerating growth, and how to work in synergy with AI in this new world.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Many people have said a lot of things, and they are all reasonable.<strong>I think one of the things that is really cool and that will be addressed over the next 5 years is the convergence of AI with robotics.</strong>We will have AI that moves around with us. Everyone knows that we will all grow up with our own R2-D2. That R2-D2 will remember everything about us, guide us along the way, and be our partner. We already know this.</p><p><p style=\"text-align: justify;\">Everyone will have their own associated GPUs in the cloud, and 8 billion people correspond to 8 billion GPUs, which is a viable outcome. Everyone will have a model tailored to themselves. That AI in the cloud will also be embodied in various places: in your car, in your own robot; it's everywhere. I think such a future is very reasonable.</p><p><p style=\"text-align: justify;\">We will understand the infinite complexity of biology, understand biological systems and how to predict them, and have a digital twin for everyone. Just as we have a digital twin in Amazon shopping, why not have one in healthcare? Of course there will be: a system that predicts how we are aging, what diseases we might have, and whatever is coming, maybe next week or even tomorrow afternoon, and predicts it ahead of time. Of course we'll have all of that. I take all of this for granted.</p><p><p style=\"text-align: justify;\">I often get asked by the CEOs I work with: now that you have all of that, what happens? What should you do? 
It is common sense when things are moving quickly. If a train is about to accelerate exponentially, all you really need to do is get on. Once you're on board, you'll figure out everything else along the way. It's impossible to try to predict where that train will be and aim ahead of it, or to predict where it will be, accelerating exponentially every second, and figure out which crossing to wait for it at. Just get on while it's still moving relatively slowly, and then accelerate exponentially together with it.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">A lot of people think it just happened overnight, but you have been working in this field for 35 years. I remember hearing Larry Page say around 2005 or 2006 that the ultimate state of Google is when machines are able to anticipate questions before you ask them and give you answers without you having to look them up. I heard Bill Gates say in 2016, when someone said everything was done, that we had the internet, cloud computing, mobile, social, and so on: \"We haven't even started yet.\" Someone asked, \"Why do you say that?\" He said, \"We don't really start until machines go from being stupid calculators to starting to think for themselves, with us.\" That's the moment we are in.</p><p><p style=\"text-align: justify;\">Having leaders like you, like Sam and Elon, Satya and others, is such an extraordinary advantage for this country. And so is the collaboration with the venture capital system that we have; I'm involved in that, being able to provide venture capital to people.</p><p><p style=\"text-align: justify;\">This is truly an extraordinary time. But one of the things I'm grateful for is that we have leaders who also understand their responsibility: that we're creating change at an accelerating rate. 
We know that while this is likely to be good for the vast majority, there will be challenges along the road. We'll handle the challenges as they arise, raise the floor for everyone, and make sure it's a win not just for some elite in Silicon Valley. Don't scare them; take them with you. We'll do it.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Yes.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">So, thank you.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Exactly.</p><p></body></html></p><html>\n<body>\n<div class=\"wrapper\">\n<header>\n<h2 class=\"title\">\nOn the competition to invest in OpenAI, the AI bubble, ASIC...Jensen Huang answers it all\n</h2>\n<h4 class=\"meta\">\n<p class=\"head\">\n<strong class=\"h-name small\">华尔街见闻</strong><span class=\"h-time small\">2025-09-27 13:30</span>\n</p>\n</h4>\n</header>\n<article>\n<p><html><head></head><body>Huang Renxun said 
that OpenAI is likely to become the next trillion-dollar company, and that it's a pity Nvidia didn't invest earlier. In the next five years, AI-driven revenue will grow from $100 billion to the trillion-dollar level, and the $100 billion may already have been reached. Regarding competition from ASICs, Nvidia said that even if competitors set their chip price at zero, customers would still choose Nvidia, because the system operating cost is lower.</p><p>Recently, Huang Renxun, founder and CEO of Nvidia, appeared on the biweekly podcast \"Bg2 Pod\" for an extensive conversation with hosts Brad Gerstner and Clark Tang.</p><p><p style=\"text-align: justify;\">During the dialogue, Huang Renxun talked about the US$100 billion cooperation with OpenAI and shared his views on topics such as the AI competitive landscape and the prospects for sovereign AI.</p><p><p style=\"text-align: justify;\">Huang Renxun said that AI competition is now fiercer than ever before.<strong>The market has evolved from simple \"GPUs\" to complex and continuously evolving \"AI factories\"</strong>that have to handle diverse workloads and exponentially growing inference tasks.</p><p><p style=\"text-align: justify;\">He predicts that if AI one day adds $10 trillion of value to global GDP,<strong>then the annual capital expenditure of the AI factories behind it will need to reach $5 trillion.</strong></p><p><p style=\"text-align: justify;\">Talking about the cooperation with OpenAI, Huang Renxun said that<strong>OpenAI is likely to become the next trillion-dollar hyperscale company. 
The only regret is that Nvidia didn't invest more, sooner;</strong>\"all the money should be given to them\".</p><p><p style=\"text-align: justify;\">On the commercial prospects of AI, Huang Renxun predicts that<strong>in the next five years, AI-driven revenue will grow from $100 billion to the trillion-dollar level.</strong></p><p><p style=\"text-align: justify;\">On competition from ASICs, Nvidia said that<strong>even if competitors set their chip price at zero, customers will still choose Nvidia, because Nvidia systems are cheaper to operate.</strong></p><p><p style=\"text-align: justify;\">The following are the highlights of the conversation:</p><p><ul style=\"\"><li>OpenAI wants to establish a \"direct relationship\" with Nvidia similar to the one Musk and X have, including a direct working relationship and a direct procurement relationship.</p><p></li><li>Assuming AI adds $10 trillion of value to global GDP, and that $10 trillion of token generation carries a 50% gross margin, then the remaining $5 trillion must pay for factories and AI infrastructure,<strong>so an annual capital expenditure of about $5 trillion for these factories is reasonable.</strong></p><p></li><li><strong>10 GW would require about $400 billion in investment, and that $400 billion would largely need to be funded through OpenAI's offtake agreement, i.e., their exponentially growing revenue.</strong>It has to be financed through their capital, through equity financing, and through whatever debt they can raise.</p><p></li><li><strong>AI-driven revenue growing from $100 billion to $1 trillion over the next five years is almost certain,</strong>and the $100 billion has almost been reached already.</p><p></li><li><strong>The global shortage of computing power is not due to a shortage of GPUs, but because cloud providers' orders persistently underestimate future demand, leaving Nvidia in a long-term \"emergency production mode\".</strong></p><p></li><li>Huang Renxun said that<strong>Nvidia's only regret 
is that OpenAI invited us to invest early on, but we were too poor at the time to invest enough money, and I should have given them all my money.</strong></p><p></li><li><strong>Nvidia is likely to become the first company in the $10 trillion class.</strong>A decade ago, people said there could never be a trillion-dollar company. Now there are ten.</p><p></li><li>AI is now more competitive than ever, but it is also more difficult than ever. That's because wafer costs keep rising, which means you can't achieve an X-fold improvement unless you do co-design at extreme scale, says Huang.</p><p></li><li>Google has the advantage of being forward-thinking. They started TPU1 before everything started.<strong>When TPU becomes a big business, customer-owned tools will become the mainstream trend.</strong></p><p></li><li><strong>The competitive advantage of Nvidia's chips lies in total cost of ownership (TCO).</strong>Huang says competitors are building cheaper ASICs and can even set the price at zero, but even then, you would still buy the Nvidia system, because the total cost of running it is still lower (the land, power, and infrastructure alone are already worth $15 billion).</p><p></li><li>Nvidia chips deliver twice the performance, or tokens per watt, of other chips,<strong>so for the same energy consumption, customers can generate twice as much revenue from their data centers. Who doesn't want twice as much income?</strong></p><p></li><li>Every country must build sovereign AI.<strong>No one needs an atomic bomb, but everyone needs AI.</strong></p><p></li><li>Just like when motors replaced labor and physical activity, now we have AI. 
AI supercomputers and AI factories will generate tokens to augment human intelligence,<strong>which accounts for roughly 55-65% of global GDP, about $50 trillion.</strong></p><p></li><li><strong>AI is not a zero-sum game.</strong>\"The more ideas I have, the more problems I imagine we can solve, and the more jobs we create,\" Huang said.</p><p></li><li><strong>One of the things that will be really cool and will be addressed over the next 5 years is the convergence of AI with robotics.</strong></p><p></li><li><strong>Even setting aside the new opportunities created by AI, the mere fact that AI changes the way things are done has tremendous value.</strong>It's like switching from kerosene lamps to electricity, and from propeller aircraft to jets.</p><p></li></ul><p style=\"text-align: justify;\">The following is a summary of Huang Renxun's full dialogue, translated by AI tools and abridged by Wallstreetcn:</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Jensen, it's great to be back again, and of course my partner Clark Tang. I can't believe it – welcome to NVIDIA.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Oh, and beautiful glasses.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">They look really good on you. The problem is that everyone will want you to wear them all the time now. They'll say, \"Where are the red glasses?\" I can attest to that.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">It has been over a year since we last recorded the show. More than 40% of revenue today comes from inference, but inference is about to change because of the advent of chain-of-thought reasoning. 
It's about to grow a billion times, right? A million times, a billion times.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Right. This is exactly the part that most people haven't fully internalized yet. That's the industry we've been talking about. This is the Industrial Revolution. To be honest, it feels like you and I have continued the show every day since then. In AI time, it has been about a hundred years.</p><p><p style=\"text-align: justify;\">I recently re-watched the show, and many of the things we discussed stood out. One of the most profound moments for me was when you were pounding the table and emphasizing, back when pre-training was at some sort of low tide and people were saying, \"God, pre-training is over, right? Pre-training is over. We're not going to continue. We're overbuilding.\" That was about a year and a half ago. You said that inference would grow not 100 times or 1,000 times, but a billion times.</p><p><p style=\"text-align: justify;\">Which brings us to where we are today. You announced this huge deal. We should start there.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">I underestimated it. Let me make it official. I reckon we have three scaling laws right now, right? We have the pre-training scaling law. We have the post-training scaling law. Post-training is basically AI doing exercises.</p><p><p style=\"text-align: justify;\">Yes, practicing a skill until you master it. The model tries a variety of different approaches, and in order to do that, it has to do inference. So now training and inference are integrated in reinforcement learning. It's very complicated. This is called post-training. Then the third is inference. The old way of inference was one-shot, right? But the new way of inference we endorse is thinking: think before you answer.</p><p><p style=\"text-align: justify;\">Now you have three scaling laws. 
The longer you think, the better the quality of the answer you get. In the process of thinking, you do research and check some basic facts. You learn something, think a little more, learn a little more, and then generate the answer, rather than just generating it right away. So thinking, post-training, pre-training: we now have three scaling laws instead of one.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">You knew this last year, but what is your level of confidence this year that inference will grow by a billion times, and how far that will take the level of intelligence? Are you more confident this year than you were last year?</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">I'm more confident this year, and the reason is that looking at agentic systems now, AI is no longer one language model; AI is a system of language models, all running concurrently, possibly using tools. Some of them use tools, some do research.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">There are a ton of developments across the industry at the moment, all about multimodality, and looking at all the video content being generated, it's really amazing technology. And that brings us to the pivotal moment everyone is talking about this week: the massive Stargate partnership with OpenAI that you announced a few days ago, in which you are going to be a preferred partner and invest $100 billion in the company over time. They will build 10 gigawatts of facilities, which could generate up to $400 billion in revenue for Nvidia if they use Nvidia products in those facilities. 
Please help us understand this partnership, what it means to you, and why this investment makes so much sense for Nvidia.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">First, I'll answer the last question before elaborating on it.<strong>I think OpenAI is likely to be the next hyperscale company at the trillion-dollar level.</strong></p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Why do you call it a hyperscale company? Hyperscale meaning, like, Meta is a hyperscale company and Google is a hyperscale company. They'll have consumer and enterprise services, and they're likely to be the world's next trillion-dollar hyperscale company. I think you'll agree with that.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">I agree. If so, then the opportunity to invest before they reach that goal is one of the smartest investments we can imagine. You have to invest in areas that you understand, and we happen to understand this area. So we have the opportunity to invest, and the return on this money will be fantastic. We really like this investment opportunity. We didn't have to invest; it wasn't necessary for us. But they gave us the opportunity to invest, which was a great thing.</p><p><p style=\"text-align: justify;\">Now let me start at the beginning. We are working with OpenAI on several projects.<strong>The first project is the buildout on Microsoft Azure.</strong>We will continue to do so, and this cooperation is progressing very well. We still have a few years of construction work to do, and there are hundreds of billions of dollars of work there alone.</p><p><p style=\"text-align: justify;\"><strong>The second is the buildout with OCI,</strong>where I think there are about five, six, seven gigawatts to be built. 
So we're building with OCI, OpenAI, and SoftBank, and those projects are all contracted, and there's a lot of work to be done.<strong>And the third is CoreWeave.</strong>These are all still in the context of OpenAI.</p><p><p style=\"text-align: justify;\">So the question is, what is this new partnership? This new partnership is about helping OpenAI, working with OpenAI, to build their own AI infrastructure for the first time. This is our direct collaboration with OpenAI at the chip level, software level, system level, and AI factory level, to help them become a fully operational hyperscale company. This will continue for a while, and it will complement the two exponential increases they are experiencing.<strong>The first exponential growth is an exponential increase in the number of customers,</strong>because AI is getting better, use cases are improving, and almost every app is now connected to OpenAI, so they are experiencing exponential growth in usage.<strong>The second exponential growth is the exponential growth in compute per use.</strong>Now it's not just one-shot inference, but thinking before answering. These two exponential increases compound their computing demand. So we have to build all these different projects. The last project is in addition to everything they've already announced and everything we've already worked on with them, and it will support this incredible exponential growth.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">One of the interesting points you made is that in your opinion, they are likely to be a trillion-dollar company, which makes this a good investment. At the same time, they're building their own, and you're helping them build their own data centers. 
So far they have been outsourcing to Microsoft to build the data center and now they want to build their own full stack factory.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">They want…they want…they basically want to have a relationship with us like Elon and X.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">I think that's a really important thing when you consider the advantages that Colossus has, they're building the full stack. That's hyperscale because if they don't use that capacity, they can sell it to someone else. Similarly, Stargate is building huge capacity. They think they'll use most of the capacity, but it also allows them to sell it to others. This sounds a lot like AWS, GCP, or Azure.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">They want to build the same direct relationship with us, both a direct working relationship and a direct purchasing relationship. Just like Zuckerberg and Meta's relationship with us, our direct partnerships with Sundar and Google, and our direct partnerships with Satya and Azure. They had reached a large enough scale to think it was time to start building these direct relationships. I am happy to support this, Satya knows about it and Larry knows, everyone is informed and everyone is very supportive.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">I found an interesting phenomenon regarding the market landscape of Nvidia's accelerated computing. Oracle is building the $300 billion Colossus project, we know what governments are building, we know what hyperscale cloud service providers are building, Sam is talking about trillion level investment. 
But the 25 sell-side analysts on Wall Street covering our stock, looking at their consensus expectations, basically see our growth flattening from 2027, with only 8% growth from 2027 to 2030. The only job of these 25 people is to forecast Nvidia's growth rate.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Frankly, we're pretty comfortable with that. We regularly exceed market expectations without any problems. But there is this interesting disagreement. I hear these opinions every day on CNBC and Bloomberg. I think it involves some questioning about shortages leading to surpluses, and they don't believe in sustained growth. They said, \"Well, we recognize your performance in 2026, but in 2027, there may be an oversupply and you won't need that much.\"</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Interestingly, the consensus expectation is that this growth won't happen. We also developed forecasts for the company, taking all of this data into account. This allows me to see that even though we're two and a half years into the AI era, there's a huge divide between what Sam Altman, me, Sundar, Satya are saying and what Wall Street still believes.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">I don't think it's a contradiction. Let me explain with three key points, hopefully to help everyone feel more confident in the future of Nvidia.</p><p><p style=\"text-align: justify;\">The first point is the perspective of physical laws, which is the most important point: the era of general computing is over, and the future is accelerated computing and AI computing. What needs to be considered is how many trillions of dollars of computing infrastructure around the world need to be upgraded. When they are updated, it will turn to accelerated calculations. 
Everyone agrees on this; they all say, \"Yes, we totally agree, the age of general-purpose computing is over and Moore's Law is dead.\" That means general-purpose computing will shift to accelerated computing. Our partnership with Intel is about exactly that: general-purpose computing needs to converge with accelerated computing, and that creates an opportunity for them.</p><p><p style=\"text-align: justify;\">Second, the first application of AI is actually already ubiquitous, in search, recommendation engines, shopping, and other areas. The classic infrastructure, which used CPUs for recommendations, is now shifting to GPUs for AI. This is the move from classical computing to accelerated computing and AI. Hyperscale computing is shifting from CPUs to accelerated computing and AI, which means serving Meta, Google, ByteDance, Amazon, and others as they move their traditional hyperscale workloads to AI, and that represents a market worth hundreds of billions of dollars.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Considering platforms like TikTok, Meta, and Google, there may be 4 billion people worldwide whose workloads are driven by accelerated computing.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">So even setting aside the new opportunities AI creates, AI changing how existing things are done is already enormously valuable. It's like switching from kerosene lamps to electricity, or from propeller aircraft to jets.</p><p><p style=\"text-align: justify;\">Everything I've talked about so far is these fundamental shifts. And once you move to AI and accelerated computing, what new applications emerge? Those are all the AI opportunities we talk about.</p><p><p style=\"text-align: justify;\">Simply put, just as motors replaced manual labor and physical work, now we have AI. 
These AI supercomputers and AI factories I mentioned will generate tokens that augment human intelligence. Human intelligence accounts for roughly 55-65% of global GDP, which is about $50 trillion, and that $50 trillion will be augmented.</p><p><p style=\"text-align: justify;\">Start with a single person. Suppose I hire an employee with a $100,000 salary and equip him with $10,000 of AI. If that $10,000 of AI makes the $100,000 employee two or three times more productive, would I do it? Without hesitation. I'm doing exactly that for everyone in the company right now: every software engineer and every chip designer has AI collaborating with them, 100% coverage. The result is that we are making better chips, in greater volume, faster. The company grows faster, employs more people, is more productive, earns more, and is more profitable.</p><p><p style=\"text-align: justify;\">Now apply Nvidia's story to global GDP. What is likely to happen is that $50 trillion gets augmented by $10 trillion, and that $10 trillion has to run on machines. <strong>What makes AI different from the past is that software used to be pre-written, run on a CPU, and operated by people. In the future, AI generates tokens; machines must generate tokens and think, so the software is always running. Software used to be written once; now it is effectively being written all the time, thinking all the time. For AI to think, it needs a factory.</strong></p><p><p style=\"text-align: justify;\">Suppose that $10 trillion of token generation carries a 50% gross margin; the other $5 trillion pays for factories and AI infrastructure. If you told me that annual global capital expenditure is about $5 trillion, I would say the math is reasonable. 
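The back-of-envelope math in this passage can be sketched in a few lines. The inputs (a roughly $50 trillion intelligence share of GDP, a $10 trillion enhancement, a 50% gross margin) are the speaker's own round numbers; the function name and structure are ours.

```python
def ai_infrastructure_capex(global_gdp=100.0,
                            intelligence_share=0.5,
                            enhancement_rate=0.2,
                            gross_margin=0.5):
    """Return (enhanced_base, token_revenue, implied_capex), all in $T."""
    enhanced_base = global_gdp * intelligence_share      # ~50-65% of GDP -> ~$50T
    token_revenue = enhanced_base * enhancement_rate     # $50T augmented by ~$10T of AI output
    implied_capex = token_revenue * (1 - gross_margin)   # 50% margin -> ~$5T for factories
    return enhanced_base, token_revenue, implied_capex

print(ai_infrastructure_capex())  # (50.0, 10.0, 5.0)
```

With these round numbers, the implied ~$5 trillion of annual AI-factory spending matches the figure the speaker compares to global capital expenditure.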
That's the future: move from general-purpose computing to accelerated computing, replace all the hyperscale servers with AI, and then augment the human-intelligence share of global GDP.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">At present we estimate the annual revenue of this market at about $400 billion. So the TAM is going to grow 4 to 5 times from where it is now.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Last night, Alibaba's Eddie Wu said that between now and the end of this decade they will increase data-center power by 10 times. Nvidia's revenue is essentially tied to power. He also said that token generation doubles every few months. <strong>This means performance per watt must keep growing exponentially. That's why Nvidia keeps breaking through on performance per watt: in this future, performance per watt is basically revenue per watt.</strong></p><p><p style=\"text-align: justify;\">There is interesting historical context behind this. For two thousand years, there was basically no GDP growth. Then the Industrial Revolution came and GDP accelerated. The digital revolution came, and GDP accelerated again. As Scott Bessent has said, he thinks we'll have 4% GDP growth next year. What you're saying is that world GDP growth will accelerate, because we are now giving the world billions of colleagues who work for us. If GDP has been the output of a fixed pool of labor and capital, it must now accelerate.</p><p><p style=\"text-align: justify;\">Look at the changes AI technology is bringing. AI, including large language models and all the AI agents, is creating a new AI-agent industry. OpenAI is the fastest revenue-growing company in history, growing exponentially. AI itself is a fast-growing industry, and AI needs the factories and infrastructure behind it. 
That industry is growing, my industry is growing, and the industries beneath mine are growing too. Energy is growing; this is a renaissance for the energy industry, and all the companies in the infrastructure ecosystem, nuclear, gas turbines and so on, are doing well.</p><p><p style=\"text-align: justify;\">These numbers have everyone talking about gluts and bubbles. Zuckerberg said on a podcast last week that he thinks there will likely be a point where Meta overspends by, say, $10 billion, but that it doesn't matter, because this is so important to the future of their business that it's a risk they have to take. It sounds a bit like a prisoner's dilemma, but these are very happy prisoners.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">We estimate $100 billion in AI revenue in 2026, excluding Meta, excluding GPUs running recommendation engines, and excluding functions like search. The hyperscale server industry is already at trillion-dollar scale, and it will turn to AI. Skeptics will say we need to go from $100 billion in AI revenue in 2026 to at least $1 trillion in 2030. You just mentioned $5 trillion when discussing global GDP. Building it bottom-up, <strong>can you see AI-driven revenue growing from $100 billion to $1 trillion in the next 5 years? Are we growing that fast?</strong></p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Yes, and I would even say we have already reached it, because the hyperscalers have moved from CPUs to AI, so their entire revenue base is now AI-driven.</p><p><p style=\"text-align: justify;\">You can't do TikTok without AI, you can't do YouTube short videos without AI. The amazing work Meta is doing in customizing and personalizing content can't happen without AI. Previously, these jobs were done by people: 
four choices were created in advance and then one was selected by a recommendation engine. Now it becomes an unlimited number of choices generated by AI.</p><p><p style=\"text-align: justify;\">And these things are already happening. We've transitioned from CPUs to GPUs, mostly for these recommendation engines, and that shift is fairly recent, within the last three or four years. When I met Zuck at SIGGRAPH, he told me they got off to a late start with GPUs. Meta has been using GPUs for about two and a half years, which is fairly new. Doing search on GPUs is absolutely new.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">So your argument is that <strong>the probability of a trillion dollars of AI revenue by 2030 is near certainty, because we have almost reached it already.</strong> Let's talk about the incremental part from here. I've now heard your top-down analysis on the share of global GDP as well as the bottom-up one. What do you think is the probability that we run into oversupply in the next three, four, or five years?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">It's a distribution, and we don't know the future. But until we have completely converted all general-purpose computing to accelerated computing and AI, I think oversupply is extremely unlikely. That will take years.</p><p><p style=\"text-align: justify;\">Until all recommendation engines are based on AI, until all content generation is based on AI, because consumer-facing content generation is very much recommender systems and the like, all of which will be generated by AI. 
Until all the classic hyperscale businesses have turned to AI, everything from shopping to e-commerce, until all of it has turned over.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">But all this new construction, when we're talking about a trillion dollars, is investment for the future. Are you obligated to deploy those funds even if you see a slowdown or some sort of oversupply coming? Or is this you waving the flag at the ecosystem, telling them to go build, knowing that if a slowdown comes you can always dial back the investment?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Actually, the opposite is true, because we sit at the end of the supply chain and we respond to demand. Right now every VC will tell you that compute is in short supply. <strong>The world has a computing shortage, not merely a GPU shortage.</strong> If they give me an order, I will produce it. Over the last few years we have really built out the supply chain, and we're ready on everything from wafer starts to packaging to HBM memory. If it needs to double, we will double it. The supply chain is ready.</p><p><p style=\"text-align: justify;\">Now we're just waiting for demand signals. When the cloud service providers, hyperscalers, and our other customers make their annual plans and give us forecasts, we respond and produce accordingly. 
Of course, <strong>what happens now is that every forecast they give us is wrong, because they are under-forecasting, so we are always in emergency mode.</strong> We've been in emergency mode for a few years now; every forecast we receive is a significant increase over the year before, and it's still not enough.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Satya seemed to pull back a bit last year; some called him the adult in the room, tamping down expectations. A few weeks ago he said they have built 2 gigawatts this year and will accelerate from here. Do you see some traditional hyperscalers moving a little slower than CoreWeave or Elon's xAI, maybe a little slower than Stargate? It sounds like they're all leaning in now.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Because of the second exponential.</p><p><p style=\"text-align: justify;\">We've already seen exponential growth in AI adoption and engagement. <strong>The second exponential is the ability to reason.</strong> This is a conversation we had a year ago. We said then that AI was moving beyond one-shot, memorized answers, memorizing and generalizing, which is basically pre-training, like remembering what 8 times 8 equals; that is one-shot AI. A year ago, reasoning appeared, tool use appeared, and now you have thinking AI, which takes up to a billion times more computation.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Certain hyperscale customers have internal workloads that they have to migrate from general-purpose computing to accelerated computing anyway, so they keep building. 
I think some hyperscalers have different workloads, so they weren't sure how quickly they could digest the capacity, but by now everyone has concluded that they are badly under-built.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">One of my favorite applications is traditional data processing, both structured and unstructured. We will soon announce a very large initiative for accelerated data processing. Data processing accounts for the vast majority of CPU usage in the world today, and it still runs almost entirely on CPUs. Go to Databricks: mostly CPUs. Go to Snowflake: mostly CPUs. Oracle's SQL processing: mainly CPUs. Everyone does SQL structured data processing on CPUs. In the future, this will shift to AI data processing.</p><p><p style=\"text-align: justify;\">This is a huge market we are going to enter. But it takes everything Nvidia does: you need the acceleration layers and domain-specific data-processing recipes, and we have to build them.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">When I turned on CNBC yesterday, they were talking about an oversupply bubble; when I turned on Bloomberg, they were talking about circular revenue.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">When someone questions our investments and business relationships with companies like OpenAI, I need to clarify a few points. First, circular-revenue arrangements typically mean companies entering into misleading transactions that artificially inflate revenue without any underlying economic substance. In other words, growth driven by financial engineering rather than customer demand. 
The typical cases everyone cites are, of course, Cisco and Nortel in the last bubble 25 years ago.</p><p><p style=\"text-align: justify;\">When we, or Microsoft, or Amazon invest in companies that are also big customers of ours, as when we invest in OpenAI and OpenAI buys tens of billions of dollars of chips, I would remind everyone that the analysts on platforms like Bloomberg who worry about circular revenue or round-tripping are worrying excessively.</p><p><p style=\"text-align: justify;\">10 GW would require about $400 billion of investment, and that $400 billion would largely need to be funded through their offtake agreements, that is, through their exponentially growing revenue. It has to be financed through their own capital, through equity, and through the debt they can raise; those are the three financing instruments. How much equity and debt they can raise depends on the confidence investors have in the sustainability of their revenue. Smart investors and lenders weigh all of these factors. Fundamentally, that is for them to manage; it's their company, not my business. Of course, we stay in close contact with them and make sure our build-out supports their continued growth.</p><p><p style=\"text-align: justify;\">The revenue side has nothing to do with the investment side. The investment is not tied to anything; it is simply an opportunity to invest in them. As we said before, this could very well be the next multi-trillion-dollar hyperscale company. Who wouldn't want to be an investor in it? <strong>My only regret is from when they invited us to invest early on. I remember those conversations; we were too poor to invest enough. I should have given them all our money.</strong></p><p><p style=\"text-align: justify;\">The reality is that if we don't do our job and keep up, if Rubin isn't a great chip, they can source other chips for these data centers. They have no obligation to use our chips. 
As mentioned earlier, we view this as an opportunistic equity investment.</p><p><p style=\"text-align: justify;\">We've also made some great investments. I have to say it: we invested in xAI, we invested in CoreWeave, and those were great, very smart investments.</p><p><p style=\"text-align: justify;\">Back to the root question: we are open and transparent about what we are doing. There is underlying economic substance here; we are not simply passing revenue back and forth between two companies. People pay for ChatGPT every month, and 1.5 billion monthly active users are using the product. Every business will either adopt this technology or die. Every sovereign state treats this as a matter of national security and economic security, much as it does nuclear energy. What person, company, or country would say that intelligence is optional for them? It is fundamental. On intelligence automation, I think I've fully addressed the demand question.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Let's talk about system design. In 2024 you moved to the annual release cycle, starting with Hopper. Then you did a massive upgrade that required a major data-center redesign and launched Grace Blackwell. The cycle continues through 2025 and 2026, with Vera Rubin in the second half of 2026, Rubin Ultra in 2027, and Feynman in 2028.</p><p><p style=\"text-align: justify;\">How is the shift to an annual release cycle going? What are the main objectives? Does AI help you execute the annual cycle? The answer is yes; without AI, Nvidia's speed, pace, and scale would be limited.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">The answer to that last question is yes. These days, without AI, it is simply impossible to build the products we build.</p><p><p style=\"text-align: justify;\">Why do we do this? 
Remember what Eddie said on the earnings call, what Satya said, what Sam said: token generation rates are growing exponentially and customer usage is growing exponentially. I think they have about 800 million weekly active users, less than two years after ChatGPT's release. And each user is generating more tokens, because they are using inference-time reasoning.</p><p><p style=\"text-align: justify;\">First, because the token generation rate is growing at an incredible rate, two exponentials stacked on top of each other, the cost of token generation will keep growing unless we improve performance at an incredible rate. Moore's Law is dead: transistor cost is basically flat year over year, and power consumption is basically flat. Given those two constraints, unless we invent new technology that drives cost down, how do you offset two stacked exponentials by giving someone a few percentage points off?</p><p><p style=\"text-align: justify;\"><strong>Therefore, we have to improve performance every year at a rate that keeps up with the exponentials.</strong> From Kepler all the way to Hopper, the improvement was maybe 100,000x; that was the start of Nvidia's AI journey, 100,000x in ten years. Between Hopper and Blackwell we achieved 30x in a year thanks to NVLink 72; Rubin will deliver another X-factor, and Feynman another.</p><p><p style=\"text-align: justify;\">We do this because transistors don't help us much anymore; under Moore's Law, density is still growing but performance is not. So one of the challenges we face is that we have to break the whole problem down at the system level, changing every chip, every software stack, and every system at once. This is the ultimate extreme co-design; no one has co-designed at this level before. 
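The cost argument here can be illustrated with a toy model. The growth rates below are illustrative assumptions, not Nvidia figures: if token demand grows roughly 10x a year while cost per unit of compute stays flat, the cost of serving that demand compounds just as fast, and a one-time price discount cannot offset it.

```python
def cost_to_serve(years, demand_growth=10.0, perf_gain_per_year=1.0, discount=0.0):
    """Relative annual cost of serving token demand after `years` years.

    Each year demand multiplies by `demand_growth`; hardware performance
    gains divide the cost back down.  `discount` is a one-time price cut.
    """
    cost = 1.0
    for _ in range(years):
        cost *= demand_growth / perf_gain_per_year  # demand up, perf gains offset
    return cost * (1.0 - discount)

# Flat hardware plus a 15% price discount: cost still explodes (~850x after 3 years).
print(cost_to_serve(3, perf_gain_per_year=1.0, discount=0.15))
# Hardware improving ~10x/year keeps the cost of serving demand flat.
print(cost_to_serve(3, perf_gain_per_year=10.0))  # 1.0
```

This is the point of the passage: only an exponential performance gain, not a few percentage points of discount, can keep token costs down against stacked exponentials in demand.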
We rework CPUs, GPUs, networking chips, NVLink scale-up, and Spectrum-X scale-out.</p><p><p style=\"text-align: justify;\">Someone says, \"Oh, it's just Ethernet.\" Spectrum-X Ethernet isn't just Ethernet. People are starting to find out that the X-factor is pretty incredible. Nvidia's Ethernet business, the Ethernet business alone, is the fastest-growing Ethernet business in the world.</p><p><p style=\"text-align: justify;\">We need to scale up, and of course build bigger systems, and we scale across multiple AI factories by connecting them together. We do this work on an annual cycle, so we are now delivering exponential growth in the technology itself. That lets our customers drive down token costs while making the tokens smarter and smarter through pre-training, post-training, and thinking. <strong>The result is that as AIs become smarter, their usage increases, and when usage increases, it grows exponentially.</strong></p><p><p style=\"text-align: justify;\">For those unfamiliar: what is extreme co-design? Extreme co-design means optimizing models, algorithms, systems, and chips at the same time. You have to innovate outside the box. Under Moore's Law, you just made the CPU faster and everything got faster; you were innovating inside the box. But if the chip can't get any faster, what do you do? You innovate outside the box. Nvidia really changed things: we invented CUDA, we invented the GPU, and we invented the idea of large-scale co-design. That's why we're involved in all of these industries; we create all these libraries and co-design with them.</p><p><p style=\"text-align: justify;\">First, the full stack extends even beyond software and GPUs. Now it's switches and networks at the data-center level, the software in all of those switches and networks, the network interface cards, scale-up and scale-out, optimizing across all of it. 
That's how Blackwell achieves a 30x improvement over Hopper; no amount of Moore's Law could deliver that. This is the limit, and it comes from extreme co-design. That's why Nvidia went into networking and switching, scale-up and scale-out, scaling across systems, building CPUs, GPUs, and network interface cards.</p><p><p style=\"text-align: justify;\">That's also why Nvidia is so rich in software and talent. We contribute more open-source software to the world than almost any company, except one, I think AI2 or so. We have enormous software resources, and that's just in AI. Don't forget computer graphics, digital biology, self-driving cars; the amount of software our company produces is incredible, and it is what lets us do deep, extreme co-design.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">I've heard from one of your competitors that yes, you do this because it helps reduce token-generation costs, but at the same time your annual release cycle makes it nearly impossible for competitors to keep up. And the supply chain is locked in further because you give it three years of visibility; now the supply chain has confidence in what it can build. Have you thought about it that way?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Before you ask, consider this: for us to ship hundreds of billions of dollars of AI infrastructure every year, think about how much capacity we had to start preparing a year ahead. 
We're talking about committing hundreds of billions of dollars of wafer starts and DRAM procurement.</p><p><p style=\"text-align: justify;\">That has reached a scale almost no other company can keep up with.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">So do you think your competitive moat is bigger than it was three years ago?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Yes. First of all, <strong>the competition is fiercer than ever, but competing is also harder than ever. I say that because wafer costs keep rising, which means you can't deliver an X-factor of improvement unless you do co-design at extreme scale,</strong> unless you develop six, seven, eight chips a year, which is remarkable. It's not about building an ASIC; it's about building an AI factory system. The system contains many chips, all designed in tandem to deliver the 10x factors we achieve almost routinely.</p><p><p style=\"text-align: justify;\">First, co-design is the limit. Second, scale is the limit. When your customer deploys a gigawatt, that's 400,000 to 500,000 GPUs. Getting 500,000 GPUs to work together is a miracle, and your customers take a huge risk on you when they buy it all. Ask yourself: what customer would place a $50 billion purchase order on an unproven architecture, a brand-new chip? You're excited about it, everyone is excited for you, you've just shown off first silicon. Who is going to hand you a $50 billion purchase order? Who would start $50 billion of wafers for a chip that has only just taped out?</p><p><p style=\"text-align: justify;\">But with Nvidia they can, because our architecture is proven. The scale of our customers is incredible, and now the scale of our supply chain is incredible too. 
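As a rough consistency check on the gigawatt figures quoted here, the numbers line up under two illustrative assumptions of ours: an all-in power budget of about 2 kW per GPU (including cooling and networking overhead) and a system cost of roughly $100,000 per GPU.

```python
GIGAWATT_W = 1_000_000_000  # one gigawatt in watts

def gpus_per_gigawatt(watts_per_gpu_all_in=2_000):
    """GPUs deployable per gigawatt at a given all-in power budget per GPU."""
    return GIGAWATT_W // watts_per_gpu_all_in

n = gpus_per_gigawatt()              # 500,000 GPUs at an assumed 2 kW each
order_billions = n * 100_000 / 1e9   # ~$50B at an assumed $100k per GPU system
print(n, order_billions)  # 500000 50.0
```

At 2.5 kW per GPU the same budget yields 400,000 GPUs, which brackets the 400,000-500,000 range and the $50 billion purchase-order figure in the passage.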
Who would spin all of that up for one company, pre-building everything, unless they knew Nvidia could deliver? They trust us to deliver to customers all over the world, so they are willing to commit hundreds of billions of dollars at once. The scale is incredible.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">One of the biggest debates and controversies around all of this is GPUs versus ASICs: Google's TPU, Amazon's Trainium, and everyone from Arm to OpenAI to Anthropic rumored to be building ASICs. Last year you said you're building systems, not chips, and that you drive performance through every part of the stack. You also said many of these projects will likely never reach production scale. But given how many projects there are, and the apparent success of Google's TPU, how do you see the landscape today?</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\"><strong>First, Google had the advantage of foresight. Remember, they started TPU1 before all of this began.</strong> It's no different from a startup: you should create a startup before the market grows. You shouldn't show up as a startup once the market has reached a trillion dollars. That is the fallacy, and every VC knows it: the idea that if you can grab a few percentage points of share in a huge market, you'll become a giant company. That is fundamentally wrong.</p><p><p style=\"text-align: justify;\">You should have 100% share of a small industry, which is what Nvidia and the TPU did. It was just our two companies back then, and you have to hope the industry gets really big. You're creating an industry.</p><p><p style=\"text-align: justify;\">The Nvidia story illustrates the point, and it is the challenge for the companies building ASICs now. 
However tempting the market looks right now, remember that it has evolved from a chip called a GPU into the AI factory I just described. You just saw me announce a chip called CPX for context processing and diffusion video generation, a very specialized but important workload inside a data center. I just mentioned the possibility of AI data-processing processors, because you need long-term memory and short-term memory, and KV-cache processing is very intensive. AI memory is a big problem: you want your AI to have a good memory, and just handling the KV cache across the whole system is complicated enough that it may require a specialized processor.</p><p><p style=\"text-align: justify;\">So Nvidia's view now goes well beyond GPUs. We look at the entire AI infrastructure and at what these great companies need to handle their diverse, ever-changing workloads.</p><p><p style=\"text-align: justify;\">Look at the transformer. The transformer architecture is changing dramatically. Without CUDA's programmability and ease of iteration, how could anyone run all the experiments needed to decide which transformer variant to use, which attention algorithm, how to partition it? CUDA makes all of that possible because it is so programmable.</p><p><p style=\"text-align: justify;\">Now think about our business this way: when all these ASIC companies and ASIC projects started three, four, five years ago, that industry was simple; one GPU was involved. Now it is massive and complex, and in two more years it will be bigger still. The scale will be enormous. So I think <strong>the battle to enter a very large market as a new entrant is difficult. 
This is true even for customers who may succeed with ASICs.</strong></p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Investors tend to be binary creatures who want a black-and-white yes or no. But even if you get an ASIC working, isn't there an optimal balance? Because when you buy the Nvidia platform, CPX will be rolled out for prefill, video generation, possibly decode, and so on.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">Yes, so many different chips and parts will be added to the accelerated-compute clusters of the Nvidia ecosystem as new workloads emerge. People trying to tape out new chips now aren't really predicting what will happen a year from now; they're just trying to get the chips to work.</p><p><p style=\"text-align: justify;\">That said, Google is a big customer for GPUs too. Google is a very special case and deserves real respect: the TPU has reached TPU7. It was a challenge for them as well, and the work they did was very difficult.</p><p><p style=\"text-align: justify;\">Let me explain: there are three classes of chips. The first class is architectural chips: x86 CPUs, Arm CPUs, Nvidia GPUs. They are architectures, with ecosystems on top; architectures carry rich IP and ecosystems, and the technology is very complex and built by owners like us.</p><p><p style=\"text-align: justify;\">The second class is the ASIC. I worked at LSI Logic, the company that invented the ASIC concept. As you know, LSI Logic no longer exists. The reason is that ASICs are great when the market is not very large: it's easy to have a contractor pack everything up and build it on your behalf, charging a 50-60% gross margin. But when an ASIC market gets big, a new practice appears, called customer-owned tooling. Who does that? 
Apple's smartphone chips. Apple's volumes are so large that it would never pay someone else a 50-60% gross margin to build ASICs; it uses customer-owned tooling.</p><p><p style=\"text-align: justify;\"><strong>So where does the TPU go when it becomes a big business? Customer-owned tooling, no doubt.</strong></p><p><p style=\"text-align: justify;\">But the ASIC has its place. Video transcoders never get that big; smart NICs never get that big. I'm not surprised when an ASIC company has 10, 12, 15 ASIC projects, because maybe five are smart NICs and four are transcoders. Are they all AI chips? Of course not. If someone builds an embedded processor as an ASIC for one specific recommendation system, that can certainly be done. But would you use it as the foundational computing engine for AI that is always changing? You have low-latency workloads, high-throughput workloads, token generation for chat, thinking workloads, AI video-generation workloads.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">You're talking about the backbone of accelerated computing.</p><p><p style=\"text-align: justify;\"><strong>Jensen Huang:</strong></p><p><p style=\"text-align: justify;\">That's what Nvidia is.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Simply put, it's the difference between playing chess and checkers. The truth is that companies starting to do ASICs today, whether Trainium or some other accelerator, are building a chip that is just one component of a much larger machine.</p><p><p style=\"text-align: justify;\">You built a very complex system, platform, factory, and now you're opening it up a little. 
You mentioned the CPX GPU, and in some ways you are breaking the workload down onto the pieces of hardware that are best in each particular area.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">We announced something called Dynamo, orchestration for disaggregated AI workloads, and we open-sourced it, because the AI factory of the future is disaggregated.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">You launched NVLink Fusion and in effect said to your competitors, including Intel: here is a way to participate in this factory we're building, because no one else is crazy enough to try to build the whole factory. But if you have something good enough and compelling enough that the end user says, \"We want to use this instead of an ARM CPU, or we want to use this instead of your inference accelerator,\" you can tap into it.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">We are thrilled to be able to build that connection. This kind of fusion is a really great idea, and we're excited to work with Intel on it. It leverages Intel's ecosystem, on which most of the world's businesses still run. We merge the Intel ecosystem with the Nvidia AI ecosystem, accelerated computing. We do the same with ARM, and there are several other companies we will work with as well. This opens up opportunities for both of us and is a win-win. I will be a big customer of theirs, and they will expose us to bigger market opportunities.</p><p><p style=\"text-align: justify;\">This goes hand in hand with a point you made that may have shocked some people. You said our competitors are building ASICs and their chips are already cheaper today, and they could even set the price at zero. 
The goal is that even if they set the chip price at zero, you will still buy the Nvidia system, because the total cost of running the system (power, data center, land, and so on) per unit of intelligence output is still lower, even when the chip is free. The land, electricity, and infrastructure are already worth $15 billion. We have done the math on this.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Would you please walk us through that calculation, because I think it's really hard to understand for people who don't think about it often. Your chips are so expensive.<strong>With competitors' chips priced at zero, how is it possible that yours is still the better choice?</strong></p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">There are two ways of thinking about it. One is from a revenue perspective. Everyone is limited by power. Suppose you are able to get an extra 2 gigawatts of power. You want to convert those 2 gigawatts into revenue. If you have twice the performance, twice the tokens per watt, of anyone else because you do deep and extreme co-design,<strong>then because my performance per unit of energy is much higher, my customers can generate twice as much revenue from their data centers. Who doesn't want twice the revenue?</strong></p><p><p style=\"text-align: justify;\">If someone gives them a 15% discount, the difference between our gross margin (about 75 points) and everyone else's (about 50 to 65 points) is not enough to make up for the 30x performance difference between Blackwell and Hopper. Let's say Hopper is a brilliant chip and system, and let's say someone else's ASIC is a Hopper. 
Blackwell is 30 times more capable.</p><p><p style=\"text-align: justify;\">So in that 1 gigawatt, you would be giving up 30 times the revenue. The cost is too great. Even if they provide the chips for free, you only have 2 gigawatts to work with, and the opportunity cost is extremely high. You'll always choose the best performance per watt.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">I heard from the CFO of a hyperscale cloud service provider that given the performance gains your chips bring, precisely on this point of tokens per gigawatt, with power being the limiting factor, they have to upgrade to each new cycle. Will this trend continue as you look at Rubin, Rubin Ultra, Feynman?</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">We now build six or seven chips a year, and each one is part of the system. System software is everywhere and needs to be integrated and optimized across all six or seven chips to achieve Blackwell's 30-fold performance improvement. Now imagine I do this every year, consistently. If you are building a single ASIC against this family of chips, which we optimize across the entire system, that's a difficult problem to solve.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Which brings me back to the question about the competitive moat from the beginning. We've been watching this; we're investors in the ecosystem and also in your competitors, from Google to Broadcom. But when I think about it from first principles, are you increasing or decreasing your competitive moat?</p><p><p style=\"text-align: justify;\">You've moved to an annual rhythm of co-development with the supply chain. The scale is much larger than anyone expected, which requires a balance sheet and development at that scale. 
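The power-limited revenue argument above is ultimately arithmetic: with a fixed power budget, token revenue scales with tokens per watt, so the chip's purchase price can drop out of the comparison. A minimal sketch of that math follows; only the 2-gigawatt budget, the 2x tokens-per-watt premise, the free-chip premise, and the $15 billion infrastructure figure come from the interview, while the token price, chip cost, and efficiency numbers are purely illustrative assumptions.

```python
# Toy model of the power-limited argument: with a fixed power budget,
# revenue scales with energy efficiency, so a higher-performance system
# can out-earn a system whose chips are literally free.
# All dollar figures and token prices below are illustrative assumptions.

def annual_token_revenue(power_watts, tokens_per_joule, usd_per_million_tokens):
    """Revenue from converting a fixed power budget into tokens for one year."""
    seconds_per_year = 365 * 24 * 3600
    tokens = power_watts * tokens_per_joule * seconds_per_year
    return tokens * usd_per_million_tokens / 1e6

POWER = 2e9  # the 2-gigawatt budget used in the interview

# Hypothetical system A: expensive chips, 2x tokens per unit of energy.
rev_a = annual_token_revenue(POWER, tokens_per_joule=2.0, usd_per_million_tokens=0.50)
cost_a = 15e9 + 10e9  # land/power/infrastructure + chips (illustrative)

# Hypothetical system B: chips priced at zero, half the energy efficiency.
rev_b = annual_token_revenue(POWER, tokens_per_joule=1.0, usd_per_million_tokens=0.50)
cost_b = 15e9 + 0.0  # same infrastructure, free chips

profit_a = rev_a - cost_a
profit_b = rev_b - cost_b
print(f"A: revenue ${rev_a/1e9:.1f}B, profit ${profit_a/1e9:.1f}B")
print(f"B: revenue ${rev_b/1e9:.1f}B, profit ${profit_b/1e9:.1f}B")
```

Under these assumed numbers, system A earns twice the revenue from the same power budget, and that gap dwarfs the chip savings of system B, which is the shape of the claim, even though the real inputs would differ.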
Your initiatives through acquisitions and organic growth, including NVLink Fusion, CPX, and the others we just discussed. All of these factors convince me that your competitive moat is strengthening, at least when it comes to building factories or systems, which is striking. But interestingly, your P/E is much lower than most comparable companies'. I think part of this has to do with the law of large numbers, the feeling that a $4.5 trillion company can't get much bigger. But I asked you this a year and a half ago, and you're sitting here today: the market is going to grow to a 5x or 10x increase in AI workloads, we know what capex is doing, and so on. In your opinion, is there any conceivable scenario in which your top line in 5 years won't be 2x or 3x higher than in 2025? Given these advantages, what is the probability that revenue won't actually be much higher than it is today?</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">I'd answer it this way. As I described, our chances are much greater than the consensus implies. I'm here to say,<strong>I think Nvidia is likely to be the first $10 trillion company.</strong>I've been here long enough that just a decade ago, you should remember, people said there could never be a trillion-dollar company. Now we have ten.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">The world is bigger today, back to exponential developments in GDP and growth.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">The world is bigger, and people misunderstand what we do. They think of us as a chip company. We do make chips; we make the best chips in the world.</p><p><p style=\"text-align: justify;\">But Nvidia is actually an AI infrastructure company. We are your AI infrastructure partner, and our partnership with OpenAI is the perfect proof. 
We are their AI infrastructure partner, and we work with people in many different ways. We do not require anyone to purchase everything from us. We do not require them to purchase the entire rack. They can buy chips, components, our networking equipment. We have customers who buy only our CPUs, or only our GPUs with other people's CPUs and networking equipment. We can sell any way you like. My only request is that you buy something from us.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">We talked about Elon Musk with xAI and the Colossus 2 project. As I mentioned before, it's not just about better models; we have to build. We must have world-class builders. And I think the top builder in our country is probably Elon Musk.</p><p><p style=\"text-align: justify;\">We talked about Colossus 1 and the work he did there, building a coherent cluster of hundreds of thousands of H100s and H200s. Now he's working on Colossus 2, which could contain half a million GPUs, the equivalent of millions of H100s, in a coherent cluster. I wouldn't be surprised if he reaches the gigawatt level faster than anyone else. What are the advantages of being a builder who can build the software and the models and also understand what it takes to build these clusters?</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">I would say that these AI supercomputers are complex systems. The technology is complex; the procurement is complicated because of the financing involved; getting access to land, electricity, and infrastructure and powering it all is complicated; so is building and bringing up all of these systems. This is perhaps the most complex systems problem humanity has undertaken to date.</p><p><p style=\"text-align: justify;\">Elon has the huge advantage of holding all of these systems, how they work together and all their interdependencies, including the financing, in one mind. 
So he's a big GPT, a big supercomputer in his own right, the ultimate GPU. He has a big advantage here, and he has a strong sense of urgency and a genuine desire to build it. When will is combined with skill, the unthinkable happens. It is rather unique.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">Next I want to talk about sovereign AI as well as the global AI race. Thirty years ago, you couldn't have imagined you'd be talking with crown princes and kings in their palaces, or frequenting the White House. The president says you and Nvidia are vital to U.S. national security.</p><p><p style=\"text-align: justify;\">When I look at the situation, you wouldn't be in those rooms if governments didn't see this as at least as existential as nuclear technology was in the 1940s. And while we don't have a government-funded Manhattan Project today, it is funded by Nvidia, OpenAI, Meta, Google. We have companies the size of nations today. These companies are funding efforts that presidents and kings believe are of existential significance for their future economies and national security. Do you agree with this view?</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\"><strong>No one needs an atomic bomb, but everyone needs AI. That's a very big difference.</strong>AI is modern software. That's where I started: from general-purpose computing to accelerated computing, from hand-written line-by-line code to AI-written code. This foundation cannot be forgotten; we have reinvented computing. No new species emerged on Earth, we just reinvented computing. Everyone needs computing, and it needs to be democratized.</p><p><p style=\"text-align: justify;\">This is why all countries are aware that they have to enter the AI world, because everyone needs to keep up with computing. 
No one in the world will say, \"I was using computers yesterday, and tomorrow I'm going back to sticks and fire.\" Everyone needs to move to computing; it's simply modernization.</p><p><p style=\"text-align: justify;\">First, to participate in AI, you have to encode your history, culture, and values into AI. Of course, AI is getting smarter and smarter, and even a base AI can learn these things fairly quickly. You don't have to start from scratch.</p><p><p style=\"text-align: justify;\">So I think every country needs some sovereign capability. I suggest they all use OpenAI, Gemini, these open models, and Grok, and I suggest they all use Anthropic. But they should also devote resources to learning how to build AI. The reason is that they need to learn how to build it, not just for language models, but also AI for industrial models, manufacturing models, national security models. There is a lot of intelligence they need to cultivate themselves. So they should have sovereign capability, and every country should develop it.</p><p><p style=\"text-align: justify;\">And that is the same thing I hear around the world. They are all aware of this. They will all be customers of OpenAI, Anthropic, Grok, and Gemini, but they also need to build their own infrastructure. That's the important philosophy behind what Nvidia is doing: we're building infrastructure. Just as every country needs energy infrastructure and communications and internet infrastructure, now every country needs AI infrastructure.</p><p><p style=\"text-align: justify;\">Turning to the United States: our good friend David Sacks and the AI team have done a fantastic job. We are very fortunate to have David and Sriram in Washington, D.C. David is serving as the AI czar, and what a smart move President Trump made in putting them in the White House.</p><p><p style=\"text-align: justify;\">At this critical time, the technology is complex. 
Sriram is the only person in Washington, D.C. who I think understands CUDA, strange as that may be. But I just love the fact that at this critical time, when the technology is complex, the policies are complex, and the impact on the future of our country is so significant, we have someone who is clear-minded, has invested the time to understand the technology, and thoughtfully helps us through it.</p><p><p style=\"text-align: justify;\"><strong>Technology, like corn and steel in the past, is now a basic item of trade.</strong>It is an important part of trade. Why wouldn't you want American technology to be desired by everyone, so that it can be traded?</p><p><p style=\"text-align: justify;\">President Trump has done several things that are really good for keeping the country ahead. The first is the re-industrialization of the United States: encouraging companies to come to the United States to build, to invest in factories, and to retrain and upskill the skilled workforce, which is extremely valuable to our country. We love craft. I love people who make things with their hands, and now we're going back to building things, building magnificent, incredible things, and I love that.</p><p><p style=\"text-align: justify;\">It will change America, no doubt. We must recognize that re-industrialization in the United States will be fundamentally transformative.</p><p><p style=\"text-align: justify;\">Then of course AI. It is the great equalizer. Think about how everyone can have AI now. This is the ultimate equalizer. We have bridged the technological divide. Remember, the last time someone had to learn to use a computer for financial or professional benefit, they had to learn C++ or C, or at least Python. Now they just need to speak human language. If you don't know how to program AI, you tell the AI, \"Hi, I don't know how to program AI. How do I program AI?\" The AI will explain it to you or do it for you. That's incredible, isn't it? 
We are now bridging the technological divide with technology.</p><p><p style=\"text-align: justify;\">It is something that everyone has to be involved in. OpenAI has 800 million active users. Gosh, it really needs to get to 6 billion. It really needs to hit 8 billion soon. I think that's point one. And then, point two and point three, I think AI is going to change tasks.</p><p><p style=\"text-align: justify;\">What people get confused about is that many tasks will be eliminated and many tasks will actually be created. But it is likely that for many people, their jobs will effectively be protected.</p><p><p style=\"text-align: justify;\">For example, I've been using AI. You've been using AI. My analysts have been using AI. My engineers, every one of them, use AI constantly. Yet we are hiring more engineers, hiring more people, hiring across the board. The reason is that we have more ideas. We can pursue more ideas now, because our company has become more productive. Because we are more productive, we become richer, and because we are richer we can hire more people to pursue those ideas.</p><p><p style=\"text-align: justify;\"><strong>The idea that AI brings massive job destruction starts from the premise that we don't have more ideas. It starts from the premise that we have nothing left to do.</strong>It assumes that everything we do in our lives today is all there will ever be. If someone else does one of my tasks for me, I have one task fewer, and now I just sit there waiting for something, waiting for retirement, sitting in my rocking chair. That idea doesn't make sense to me.</p><p><p style=\"text-align: justify;\"><strong>I think intelligence is not a zero-sum game. 
The smarter the people around me, the more genius I have around me, then surprisingly, the more ideas I have, the more problems I imagine we can solve, and the more jobs we create.</strong>I don't know what the world will be like in a million years; that will be left to my children. But my feeling is that over the next few decades the economy will grow. A lot of new jobs will be created. Every job will be changed. Some work will be lost. We're not going to be rioting in the streets; these things will all turn out fine. Humans are notoriously skeptical and bad at understanding compounding systems, and they are even worse at understanding exponential systems that accelerate with scale.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">We talked a lot about exponentials today. The great futurist Ray Kurzweil said that in the 21st century we will not have a hundred years of progress; we may have twenty thousand years of progress.</p><p><p style=\"text-align: justify;\">You said earlier that we are blessed to live in and contribute to this moment. I wouldn't ask you to look ahead 10, 20, or 30 years, because I think it's too challenging. But when we think about 2030, about things like robots, you'd say 30 years is easier than 2030. Really? Yes. Okay, so I'll give you permission to look ahead 30 years. When I think about the process, I like these shorter time frames because they have to combine bits and atoms, and that is the hard part of building these things.</p><p><p style=\"text-align: justify;\">Everyone is saying this is about to happen, which is interesting but not entirely useful. 
But if we have 20,000 years of progress, think about that quote from Ray, think about exponential growth, and remember that all of our listeners, whether you're working in government, in a startup, or running a big company, need to think about the speed of accelerating change, the speed of accelerating growth, and how you can work in synergy with AI in this new world.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Many people have said a lot of things, and they are all reasonable.<strong>I think one of the really cool things that will be solved over the next 5 years is the convergence of AI with robotics.</strong>We will have AI that roams around with us. Everyone knows we will all grow up with our own R2-D2. That R2-D2 will remember everything about us, guide us along the way, and be our partner. We already know this.</p><p><p style=\"text-align: justify;\">Everyone will have their own associated GPUs in the cloud; 8 billion people corresponding to 8 billion GPUs is a plausible outcome. Everyone will have a model tailored to themselves. That AI in the cloud will also be embodied in various places: in your car, in your own robot, everywhere. I think such a future is very reasonable.</p><p><p style=\"text-align: justify;\">We will understand the infinite complexity of biology, understand biological systems and how to predict them, and have a digital twin for everyone. We have a digital twin in Amazon shopping; why not have one in healthcare? Of course we will. A system that predicts how we are aging, what diseases we might get, whatever is coming, maybe next week or even tomorrow afternoon, and warns us ahead of time. Of course we'll have all of that. I take all of this for granted.</p><p><p style=\"text-align: justify;\">I often get asked by the CEOs I work with: given all of that, what happens now? What should you do? 
My answer is common sense: move quickly. If a train is about to get exponentially faster, all you really need to do is get on. Once you're aboard, you'll figure out everything else along the way. It's impossible to predict where that train will be and shoot at it, or to predict, while it's accelerating exponentially every second, which crossing to wait for it at. Just get on while it's still moving relatively slowly, and then accelerate exponentially together.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">A lot of people think this just happened overnight. You have been working in this field for 35 years. I remember hearing Larry Page say around 2005 or 2006 that the ultimate state of Google is when machines can anticipate questions before you ask them and give you answers without you having to look them up. I heard Bill Gates say in 2016, when someone claimed everything was done, we had the internet, cloud computing, mobile, social, and so on: \"We haven't even started yet.\" Someone asked, \"Why do you say that?\" He said, \"We don't really start until machines go from being dumb calculators to starting to think for themselves, with us.\" That's the moment we are in.</p><p><p style=\"text-align: justify;\">Having leaders like you, like Sam and Elon, Satya and others, is such an extraordinary advantage for this country. And so is the collaboration with the venture capital system we have; I'm involved in that, being able to provide venture capital to people.</p><p><p style=\"text-align: justify;\">This is truly an extraordinary time. But one of the things I'm grateful for is that we have leaders who also understand their responsibility, because we're creating change at an accelerating rate. 
We know that while this is likely to be good for the vast majority, there will be challenges on the road. We'll handle the challenges as they arise, raise the bottom line for everyone and make sure it's a win, not just for some of the elite in Silicon Valley. Don't scare them, take them with you. We'll do it.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Yes.</p><p><p style=\"text-align: justify;\"><strong>Moderator:</strong></p><p><p style=\"text-align: justify;\">So, thank you.</p><p><p style=\"text-align: justify;\"><strong>Huang Renxun:</strong></p><p><p style=\"text-align: justify;\">Exactly.</p><p></body></html></p>\n<div class=\"bt-text\">\n\n\n<p> source:<a href=\"https://wallstreetcn.com/articles/3756345\">华尔街见闻</a></p>\n\n\n</div>\n</article>\n</div>\n</body>\n</html>\n","type":0,"thumbnail":"https://static.tigerbbs.com/872b31f381addcccac3697320c31c1de","relate_stocks":{"LU2602419157.SGD":"HSBC ISLAMIC GLOBAL EQUITY INDEX \"AC\" (SGD) ACC","LU0823434740.USD":"BNP PARIBAS US GROWTH \"C\" (USD) INC","LU0158827948.USD":"ALLIANZ GLOBAL SUSTAINABILITY \"A\" (USD) INC","LU2505996681.GBP":"WELLINGTON MULTI-ASSET HIGH INCOME \"AM4H\" (GBPHDG) INC","IE0005OL40V9.USD":"JANUS HENDERSON BALANCED \"A6M\" (USD) INC","LU1923623000.USD":"Natixis Thematics AI & Robotics Fund R/A USD","LU1861219969.SGD":"Blackrock Future of Transport A2 SGD-H","IE00BMPRXN33.USD":"NEUBERGER BERMAN 5G CONNECTIVITY \"A\" (USD) ACC","LU0081259029.USD":"UBS (LUX) EQUITY FUND - TECH OPPORTUNITY \"P\" (USD) ACC","SG9999000418.SGD":"Aberdeen Standard Global Technology SGD","LU2249611893.SGD":"BNP PARIBAS ENERGY TRANSITION \"CRH\" (SGD) ACC","LU0289961442.SGD":"SUSTAINABLE GLOBAL THEMATIC PORTFOLIO \"AX\" (SGD) ACC","LU2360107325.USD":"BGF FUTURE OF TRANSPORT \"A4\" (USD) INC","IE00BJJMRX11.SGD":"Janus Henderson Balanced A Acc SGD","BK4608":"AI应用概念股","LU2298322129.HKD":"BGF SUSTAINABLE ENERGY \"A2\" (HKDHDG) 
ACC","IE0034235303.USD":"PINEBRIDGE US RESEARCH ENHANCED CORE EQUITY \"A\" (USD) ACC","NVDA":"英伟达","LU0472753341.HKD":"SUSTAINABLE GLOBAL THEMATIC PORTFOLIO \"A\" (HKD) ACC","BK4598":"佩洛西持仓","LU0354030511.USD":"ALLSPRING U.S. LARGE CAP GROWTH \"I\" (USD) ACC","IE00B4JS1V06.HKD":"JANUS HENDERSON BALANCED \"A2\" (HKD) ACC","LU0107464264.USD":"abrdn SICAV I - GLOBAL INNOVATION EQUITY \"A\" (USD) ACC","LU0225283273.USD":"SCHRODER ISF GLOBAL EQUITY ALPHA \"A\" (USD) ACC","LU0211327993.USD":"TEMPLETON GLOBAL EQUITY INCOME \"A\" (USD) ACC","LU1935042488.USD":"MANULIFE GF GLOBAL MULTI-ASSET DIVERSIFIED INCOME \"AA\" (USD) INC","LU2592432038.USD":"WELLINGTON MULTI-ASSET HIGH INCOME \"A\" (USD) ACC","LU2092937148.SGD":"Blackrock ESG Multi-Asset A8 SGD-H","LU0823434583.USD":"BNP PARIBAS US GROWTH \"C\" (USD) ACC","LU2458330243.SGD":"FRANKLIN SHARIAH TECHNOLOGY \"A-H1\" (SGDHDG) ACC","LU1951198990.SGD":"Natixis Thematics AI & Robotics Fund H-R/A SGD-H","IE0004445239.USD":"JANUS HENDERSON US FORTY \"A2\" (USD) ACC","LU2092627202.USD":"Blackrock ESG Multi-Asset A8 USD-H","LU0203202063.USD":"AB SICAV I - ALL MARKET INCOME PORTFOLIO \"A2X\" (USD) ACC"},"source_url":"https://wallstreetcn.com/articles/3756345","is_english":false,"share_image_url":"https://static.laohu8.com/e9f99090a1c2ed51c021029395664489","article_id":"2570746885","content_text":"黄仁勋表示,OpenAI很可能会成为下一个万亿美元级别的公司,很遗憾没有早点投资。未来5年内,AI驱动的收入将从1000亿美元增至万亿美元级别,现在就可能达到了。关于ASIC的竞争,英伟达放话,即使竞争对手将芯片价格定为零,客户仍然选择英伟达,因为系统运营成本更低。近日,英伟达创始人兼CEO黄仁勋做客「Bg2 Pod」双周对话节目,与主持人Brad Gerstne和Clark 
Tang进行了一场广泛的对话。对谈中,黄仁勋谈及了和OpenAI价值1000亿美元的合作,并就AI竞赛格局、主权AI前景等主题发表了自己的看法。黄仁勋表示,现在的AI竞争比以往任何时候都激烈,市场已从简单的“GPU”演变为复杂的、持续进化的“AI工厂”,需要处理多样化的工作负载和呈指数级增长的推理任务。他预计,如果未来AI为全球GDP带来10万亿美元的增值,那么背后的AI工厂每年的资本支出需要达到5万亿美元级别。谈及和OpenAI的合作,黄仁勋表示,OpenAI很可能会成为下一个万亿美元级别的超大规模公司,唯一的遗憾是没有早点多投资一些,“应该把所有钱都给他们”。在AI商业化前景方面,黄仁勋预计,未来5年内,AI驱动的收入将从1000亿美元增至万亿美元级别。关于ASIC的竞争,英伟达放话,即使竞争对手将芯片价格定为零,客户仍然会选择英伟达,因为他们的系统运营成本更低。以下为对谈的亮点内容:OpenAI想和英伟达建立起类似于马斯克和X那样的“直接关系”,包括直接的工作关系和直接的采购关系。假设AI为全球GDP带来10万亿美元的增值,而这10万亿美元的Token生成有50%的毛利率,那么其中5万亿美元需要一个工厂,需要一个AI基础设施,所以这个工厂每年的合理资本支出大约是5万亿美元。10吉瓦大约需要4000亿美元的投资,这4000亿美元很大程度上需要通过OpenAI的承购协议来资助,也就是他们指数级增长的收入。这必须通过他们的资本、通过股权融资和能够筹集的债务来资助。未来5年内,AI驱动收入从1000亿美元增至1万亿美元的的概率几乎是确定的,而且现在几乎已经达到了。全球的算力短缺不是因为GPU短缺,而是因为云服务厂商的订单往往低估了未来需求,导致英伟达长期处于“紧急生产模式”。黄仁勋表示,英伟达唯一的遗憾是,OpenAI很早就邀请我们投资,但我们当时太穷了,没有投资足够多的资金,我应该把所有钱都给他们。英伟达很可能成为第一家10万亿美元级别的公司。十年前,人们说永远不可能有万亿美元的公司。现在有了10家。现在的AI竞争比以往任何时候都激烈,但也比以往任何时候都困难。黄仁勋表示,这是因为晶圆成本越来越高,这意味着除非你进行极限规模的协同设计,否则你就无法实现X倍增长因子。谷歌拥有的优势是前瞻性。他们在一切开始之前就启动了TPU1。当TPU成为一门大生意后,客户自有工具将成为主流趋势。英伟达芯片的竞争优势在于总拥有成本(TCO)。黄仁勋表示,其竞争对手在构建更便宜的ASIC,甚至可以将价格定为零。我们的目标是,即使他们将芯片价格定为零,您仍然会购买英伟达系统,因为运营该系统的总成本仍然比购买芯片更划算(土地、电力和基础设施已经价值150亿美元)。英伟达芯片的性能或每瓦token数是其他芯片的两倍,虽然每单位能耗性能也要高得多,但客户可以从他们的数据中心产生两倍的收入。谁不想要两倍的收入呢?每个国家都必须建设主权AI。没有人需要原子弹,但每个人都需要AI。就像当初马达取代劳动力和体力活动一样,现在我们有了AI。AI超算、AI工厂,将会生成token来增强人类智能,未来人工智能大约占全球GDP的55-65%,也就是约50万亿美元。人工智能不是零和游戏。黄仁勋表示,“我的想法就越多,我想象我们可以解决的问题就越多,我们创造的工作就越多,我们创造的就业机会就越多”。在接下来的5年里,真正酷且将被解决的事情之一是人工智能与机器人技术的融合。即使不考虑AI创造的新机会,仅仅是AI改变了做事方式就有巨大价值。这就像不再使用煤油灯而改用电力,不再使用螺旋桨飞机而改用喷气式飞机一样。以下为黄仁勋对谈全文纪要整理,由AI工具辅助翻译,华尔街见闻有所删减:主持人:Jensen,很高兴再次回来,当然还有我的合作伙伴Clark 
Tang。我简直不敢相信——欢迎来到英伟达。黄仁勋:哦,还有漂亮的眼镜。主持人:它们在你身上看起来真的很好。问题是现在每个人都会希望你一直戴着它们。他们会说:\"红色眼镜在哪里?\"我可以为此作证。黄仁勋:距离我们上次录制节目已经超过一年了。如今超过40%的收入来自推理,但推理即将发生变化,因为推理链的出现。它即将增长十亿倍,对吧?一百万倍,十亿倍。主持人:没错。这正是大多数人还没有完全内化的部分。这就是我们一直在谈论的行业。这是工业革命。说实话,感觉就像你和我从那时起每天都在继续这个节目。在AI时间里,这已经过了大约一百年了。我最近重新观看了这个节目,我们讨论的许多事情都很突出。对我来说最深刻的一点是你当时拍桌子强调——记得当时预训练处于某种低潮期,人们说:\"天哪,预训练结束了,对吧?预训练结束了。我们不会继续了。我们过度建设了。\"这是大约一年半前的事。你说推理不会增长100倍、1000倍,而是会增长10亿倍。这将我们带到了今天的情况。你宣布了这个巨大的交易。我们应该从那里开始。黄仁勋:我低估了。让我正式声明。我估计我们现在有三个缩放定律,对吧?我们有预训练缩放定律。我们有后训练缩放定律。后训练基本上就像AI练习。是的,练习一项技能直到掌握它为止。所以它尝试各种不同的方法,为了做到这一点,你必须进行推理。所以现在训练和推理在强化学习中整合在一起了。非常复杂。这就是所谓的后训练。然后第三个是推理。旧的推理方式是一次性的,对吧?但是我们认可的新推理方式是思考。所以在回答之前先思考。现在你有了三个缩放定律。你思考得越久,得到的答案质量就越好。在思考的过程中,你进行研究,检查一些基本事实。你学习一些东西,再思考一些,再学习一些,然后生成答案。不要一开始就直接生成。所以思考、后训练、预训练,我们现在有三个缩放定律,而不是一个。主持人:你去年就知道这一点,但你今年对推理将增长10亿倍以及这将把智能水平带到何种程度的信心水平如何?你今年比去年更有信心吗?黄仁勋:我今年更有信心了,原因是现在看看智能体系统,AI不再是一个语言模型,AI是一个语言模型系统,它们都在并发运行,可能使用工具。我们中的一些人使用工具,一些人进行研究。主持人:目前整个行业有大量的发展,都是关于多模态的,看看所有正在生成的视频内容,这真是令人惊叹的技术。这确实把我们带到了本周大家都在讨论的关键时刻,就是你几天前宣布的与OpenAI Stargate的大规模合作,你们将成为优先合作伙伴,在一段时间内向该公司投资一千亿美元。他们将建设10个千兆瓦的设施,如果他们在这10个千兆瓦设施中使用英伟达的产品,这可能为英伟达带来高达4000亿美元的收入。请帮我们理解一下这个合作关系,它对你们意味着什么,为什么这项投资对英伟达来说如此合理。黄仁勋:首先,我先回答最后一个问题,然后再详细说明。我认为OpenAI很可能会成为下一个万亿美元级别的超大规模公司。主持人:为什么你称它为超大规模公司?超大规模就像Meta是超大规模公司,谷歌是超大规模公司一样。他们将拥有消费者和企业服务,他们很可能成为世界上下一个万亿美元级别的超大规模公司。我认为你会同意这个观点。黄仁勋:我同意。如果是这样的话,在他们达到那个目标之前投资的机会,这是我们能想象到的最明智的投资之一。你必须投资你了解的领域,而我们恰好了解这个领域。因此投资的机会,这笔钱的回报将是非常棒的。所以我们很喜欢这个投资机会。我们不是必须投资,这不是我们投资的必要条件,但他们给了我们投资的机会,这是很棒的事情。现在让我从头开始说。我们正在与OpenAI在几个项目上合作。第一个项目是Microsoft Azure的建设。我们将继续这样做,这个合作进展得非常好。我们还有几年的建设工作要做,仅在那里就有数千亿美元的工作。第二个是OCI的建设,我认为大约有五、六、七千兆瓦即将建设。所以我们正在与OCI、OpenAI和软银合作建设,这些项目都已签约,有很多工作要做。第三个是Core 
Weave,这些都还是在OpenAI的背景下。那么问题是,这个新的合作关系是什么?这个新的合作关系是关于帮助OpenAI,与OpenAI合作,首次为他们建设自己的AI基础设施。这是我们直接与OpenAI在芯片层面、软件层面、系统层面、AI工厂层面的合作,帮助他们成为一个完全运营的超大规模公司。这将持续一段时间,它将补充他们正在经历的两个指数增长。第一个指数增长是客户数量的指数增长,原因是AI正在变得更好,用例正在改善,几乎每个应用程序现在都连接到OpenAI,所以他们正在经历使用量的指数增长。第二个指数增长是每次使用的计算指数增长。现在不仅仅是一次推理,而是在回答之前进行思考。主持人:这两个指数增长使他们的计算需求复合增长。所以我们必须建设所有这些不同的项目。最后一个项目是在他们已经宣布的所有内容之上的补充,是在我们已经与他们合作的所有事情之上的补充,它将支持这种难以置信的指数增长。你说的一个很有趣的点是,在你看来,他们很可能成为万亿美元级别的公司,我认为这是一个很好的投资。同时,他们正在自建,你们正在帮助他们自建数据中心。到目前为止,他们一直外包给微软来建设数据中心,现在他们想要自己建设全栈工厂。黄仁勋:他们想要……他们想要……他们基本上想和我们建立像埃隆和X那样的关系。主持人:我认为这是一个非常重要的事情,当你考虑Colossus所拥有的优势时,他们正在建设全栈。这就是超大规模,因为如果他们不使用这些容量,他们可以卖给其他人。同样地,Stargate正在建设巨大的容量。他们认为自己会使用大部分容量,但这也让他们能够将其出售给其他人。这听起来很像AWS、GCP或Azure。黄仁勋:他们希望与我们建立同样的直接关系,包括直接的工作关系和直接的采购关系。就像扎克伯格和Meta与我们的关系一样,我们与Sundar和Google的直接合作关系,以及我们与Satya和Azure的直接合作关系。他们已经达到了足够大的规模,认为是时候开始建立这些直接关系了。我很高兴支持这一点,Satya知道这件事,Larry也知道,每个人都了解情况,每个人都非常支持。主持人:关于Nvidia加速计算的市场前景,我发现了一个有趣的现象。Oracle正在建设价值3000亿美元的Colossus项目,我们知道各国政府在建设什么,我们知道超大规模云服务商在建设什么,Sam谈论的是万亿级别的投资。但华尔街25位覆盖我们股票的卖方分析师,从他们的一致预期来看,基本上认为我们的增长从2027年开始会趋于平缓,2027年到2030年只有8%的增长。这25个人的唯一工作就是预测Nvidia的增长率。黄仁勋:坦率地说,我们对此很坦然。我们定期超越市场预期没有任何问题。但确实存在这种有趣的分歧。我每天都在CNBC和彭博社听到这些观点。我认为这涉及到一些关于短缺会导致过剩的质疑,他们不相信持续增长。他们说:\"好吧,我们认可你们2026年的表现,但2027年,可能会供过于求,你们不会需要那么多。\"主持人:有趣的是,一致预期认为这种增长不会发生。我们也制定了公司的预测,考虑了所有这些数据。这让我看到,即使我们已经进入AI时代两年半了,Sam 
Altman、我、Sundar、Satya所说的话与华尔街仍然相信的之间存在巨大分歧。黄仁勋:我认为这并不矛盾。让我用三个要点来解释,希望能帮助大家对Nvidia的未来更有信心。第一点是物理定律层面的观点,这是最重要的一点:通用计算时代已经结束,未来是加速计算和AI计算。需要考虑的是,全世界有多少万亿美元的计算基础设施需要更新换代。当它们更新时,将转向加速计算。每个人都同意这一点,都说\"是的,我们完全同意,通用计算时代结束了,摩尔定律已死。\"这意味着通用计算将转向加速计算。我们与英特尔的合作就是认识到通用计算需要与加速计算融合,为他们创造机会。第二点,AI的第一个应用场景实际上已经无处不在,在搜索推荐引擎、购物等领域。基础的超大规模计算基础设施过去使用CPU做推荐,现在正转向使用GPU做AI。这就是经典计算向加速计算AI的转变。超大规模计算正从CPU转向加速计算和AI,这涉及向Meta、Google、字节跳动、亚马逊等公司提供服务,将他们传统的超大规模计算方式转向AI,这代表着数千亿美元的市场。主持人:考虑到抖音、Meta、Google等平台,全球可能有40亿人的需求工作负载由加速计算驱动。黄仁勋:所以即使不考虑AI创造的新机会,仅仅是AI改变了做事方式就有巨大价值。这就像不再使用煤油灯而改用电力,不再使用螺旋桨飞机而改用喷气式飞机一样。到目前为止我谈论的都是这些基础性的转变。而当你转向AI,转向加速计算时,会出现什么新的应用?这就是我们正在讨论的所有AI相关的机会。简单来说,就像当初马达取代劳动力和体力活动一样,现在我们有了AI。我说的这些AI超级计算机、AI工厂,将会生成代币来增强人类智能。人工智能大约占全球GDP的55-65%,也就是约50万亿美元。这50万亿美元将得到增强。让我们从单个人说起。假设我雇佣一个10万美元薪资的员工,然后为他配备1万美元的AI。如果这1万美元的AI能让这个10万美元的员工提高2倍、3倍的生产力,我会这么做吗?毫不犹豫。我现在正在公司的每个人身上这样做,每个软件工程师、每个芯片设计师都有AI与他们协作,100%覆盖。结果是,我们制造的芯片质量更好,数量在增长,速度也在提升。我们公司发展更快,雇佣更多人员,生产力更高,收入更高,盈利能力更强。现在把英伟达的故事应用到全球GDP上。很可能发生的是,那50万亿美元被10万亿美元增强。这10万亿美元需要在机器上运行。AI与过去不同的地方在于,过去软件是预先编写好的,然后在CPU上运行,由人操作。但未来AI要生成代币,机器必须生成代币并进行思考,所以软件一直在运行。过去软件写一次就行,现在软件实际上一直在编写,一直在思考。为了让AI思考,它需要一个工厂。假设这10万亿代币生成有50%的毛利率,其中5万亿需要工厂,需要AI基础设施。如果你告诉我全球年度资本支出约为5万亿美元,我会说这个数学计算是合理的。这就是未来:从Excel通用计算转向加速计算,用AI替换所有超大规模服务器,然后增强全球GDP的人类智能。主持人:目前这个市场我们估计年收入约4000亿美元。所以TAM比现在要增长4到5倍。黄仁勋:昨晚阿里巴巴的Eddie Woo(吴泳铭)说,从现在到这个十年末,他们将把数据中心功率增加10倍。英伟达的收入几乎与功率相关。他还说代币生成每几个月翻一番。这意味着每瓦性能必须持续指数级增长。这就是为什么英伟达在每瓦性能上不断突破,因为在这个未来,每瓦收入基本上就是收入。这个假设包含了非常有趣的历史背景。2000年来,GDP基本没有增长。然后工业革命来了,GDP加速增长。数字革命来了,GDP又加速增长。你所说的,就像Scott 
Your point is that world GDP growth will accelerate, because we are now giving the world billions of coworkers. If GDP is the output of a fixed stock of labor and capital, it has to accelerate. Look at what AI technology is already doing: large language models and all the AI agents are creating a new AI-agent industry. OpenAI is the fastest-growing company by revenue in history, growing exponentially. AI is itself a fast-growing industry, and because AI needs factories and infrastructure behind it, that industry grows, my industry grows, and the industries beneath mine grow. Energy grows — this is a renaissance for the energy industry; nuclear, gas turbines, every company in that infrastructure ecosystem is doing well. Those numbers are why everyone is talking about gluts and bubbles. Zuckerberg said on a podcast last week that there will very likely be an air pocket at some point and Meta might overspend by $10 billion, but it doesn't matter, because this is so important to the future of their business that it is a risk they have to take. It sounds a bit like a prisoner's dilemma — but these are very happy prisoners.

Host: We estimate about $100 billion of AI revenue in 2026 — excluding Meta, excluding the GPUs running recommendation engines, excluding search and the other functions. The hyperscale industry is already measured in trillions, and it is converting to AI. The skeptics would say we need to go from $100 billion of AI revenue in 2026 to at least $1 trillion by 2030. You cited $5 trillion when you talked about global GDP. Bottoms-up, can you see AI-driven revenue going from $100 billion to $1 trillion within five years? Are we growing that fast?

Jensen Huang: Yes — and I would add that we are already there. Because the hyperscalers have moved from CPUs to AI, their entire revenue base is now AI-driven. You can't do TikTok without AI. You can't do YouTube Shorts without AI. The amazing work Meta does on customized, personalized content is impossible without AI. That work used to be done by people: pre-create four choices and let a recommendation engine pick one. Now an AI generates an unlimited number of choices. And this has already happened — the transition from CPUs to GPUs, mostly for those recommendation engines, is quite recent, within the past three or four years. When I saw Zuck at SIGGRAPH, he would tell you they were genuinely late to GPUs; Meta has been on GPUs for maybe two and a half years. And search on GPUs is absolutely brand new.

Host: So your argument is that the probability of $1 trillion of AI revenue by 2030 is nearly certain, because we are nearly there already. Then let's talk about the increment from here. Between bottoms-up and top-down — I just heard your top-down, share-of-GDP analysis — what probability do you put on an oversupply in the next three, four, or five years?

Jensen Huang: It's a distribution; we don't know the future. But until we have fully converted all general-purpose computing to accelerated computing and AI, I think the probability is extremely low — and that conversion takes years. Until every recommendation engine is AI-based; until all content generation is AI-based, because consumer-facing content generation is largely recommender systems and the like, and all of it will be AI-generated; until every classic hyperscale business, from shopping to e-commerce, has converted. Until everything has moved over.

Host: But all of this new construction — when we talk about trillions, we are investing for the future. If you saw a slowdown or some oversupply coming, are you obligated to deploy that capital anyway? Or is this you waving a flag at the ecosystem, telling them to build, knowing you can always dial back investment if a slowdown appears?

Jensen Huang: It's actually the reverse, because we sit at the end of the supply chain; we respond to demand. Any VC will tell you right now that demand outstrips supply: there is a compute shortage in the world — and not because of a GPU shortage. If they give me the orders, I will build the chips. Over the past few years we have truly built out the supply chain, from wafer starts through packaging, HBM memory, all of it; we are ready. If we need to double, we double. The supply chain is ready. Now we simply wait for demand signals: when the cloud providers, the hyperscalers, and our customers make their annual plans and give us forecasts, we respond and build to them. Of course, what actually happens is that every forecast they give us is wrong — they under-forecast — so we are permanently in emergency mode. We have been in emergency mode for years; every forecast comes in significantly above the prior year, and it is still not enough.

Host: Satya seemed to pull back last year — some called him the adult in the room, tempering expectations. Then a few weeks ago he said they had already built two gigawatts this year and would accelerate from here. Do you see some of the traditional hyperscalers — perhaps slower-moving than CoreWeave or Elon's xAI, perhaps slower than Stargate — now leaning in far more aggressively?

Jensen Huang: Because of the second exponential. We had already been through one exponential: the exponential growth of AI adoption and engagement. The second exponential is reasoning. This was our conversation a year ago. A year ago we said that when you take AI from one-shot, memorized answers — memorize and generalize, which is essentially pretraining, like memorizing what 8 times 8 is — to reasoning and tool use, you get thinking AI, and the compute required is something like a billion times larger.

Host: Certain hyperscale customers have internal workloads that had to migrate from general-purpose to accelerated computing regardless, so they kept building. Maybe some hyperscalers had different workloads and weren't sure how fast they could digest capacity — but now everyone has concluded they have badly underbuilt.

Jensen Huang: One of my favorite applications is classic data processing — structured and unstructured. We will soon announce a very large accelerated data processing initiative. Data processing accounts for the vast majority of CPU consumption in the world today, and it still runs entirely on CPUs. Go to Databricks: mostly CPUs. Snowflake: mostly CPUs. Oracle's SQL processing: mostly CPUs. Everyone does structured SQL data processing on CPUs. In the future, all of it moves to AI data processing. It is an enormous market we are going to enter — but you need everything Nvidia builds: the acceleration layers and the domain-specific data processing recipes, which we have to go create.

Host: When I turned on CNBC yesterday they were talking about an oversupply bubble; when I turned on Bloomberg, it was circular-revenue concerns.

Jensen Huang: When people question our investments and business relationships with OpenAI and others, let me clarify a few points. First, a circular revenue arrangement normally means a company entering misleading transactions to artificially inflate revenue with no underlying economic substance — growth driven by financial engineering rather than customer demand. The canonical examples everyone cites are, of course, Cisco and Nortel in the last bubble, 25 years ago. When we — or Microsoft, or Amazon — invest in companies that are also large customers of ours, as when we invest in OpenAI and OpenAI buys tens of billions of dollars of chips, I would remind everyone that the hand-wringing from analysts on Bloomberg and elsewhere about circular revenue or round-tripping is misplaced. Ten gigawatts is roughly $400 billion of investment. That $400 billion is funded substantially by their offtake agreements — that is, by their exponentially growing revenue — and it must also be funded by their own capital, by the equity they raise, and by the debt they can raise. Those are the three funding instruments, and the equity and debt they can raise depend on confidence in the revenue they can sustain. Smart investors and lenders weigh all of those factors. Fundamentally, that is their job to do; it is their company, not my business. Of course we stay close to them and make sure our build-out supports their continued growth. But the revenue side has nothing to do with the investment side. The investment is not tied to anything; it is an opportunity to invest in them. As we said earlier, this may well become the next multi-trillion-dollar hyperscaler. Who wouldn't want to be an investor in that? My only regret is that they invited us to invest very early. I remember those conversations; we were too poor at the time and didn't invest enough. I should have given them everything we had. And the reality is that if we don't do our job and keep pace — if Rubin isn't a great chip — they can buy other chips for those data centers. They have no obligation to use ours. As I said, we treat this as an opportunistic equity investment. We have made other great investments too, I have to say: we invested in xAI, we invested in CoreWeave — excellent, very smart investments. Back to the fundamental question: we are open and transparent about what we are doing. There is underlying economic substance here; we are not simply sending revenue back and forth between two companies. People pay for ChatGPT every month; 1.5 billion monthly active users are using the product. Every enterprise will either adopt this technology or die. Every sovereign nation sees this as an existential matter for its national and economic security, like nuclear power. What person, company, or country would say that intelligence is basically optional for them? It is fundamental. The automation of intelligence — I think I have covered the demand question thoroughly.

Host: Let's talk about system design. In 2024 you moved to an annual release cadence, starting with Hopper. Then came a massive upgrade requiring major data-center retrofits: Grace Blackwell. In 2025 and the second half of 2026 comes Vera Rubin, Rubin Ultra in 2027, and Feynman in 2028. How is the move to an annual cadence going? What are the main goals? And does AI help you execute an annual cadence?

Jensen Huang: The answer to the last question is yes. Without AI, it would simply be impossible to build the products we build today. Why do it? Remember what Eddie said on the earnings call, what Satya said, what Sam said: token generation rates are growing exponentially, and customer usage is growing exponentially. I believe they have roughly 800 million weekly active users — less than two years after ChatGPT launched. And every user is generating more tokens, because they are using inference-time reasoning. First, because token generation is growing at an incredible rate — two exponentials stacked on top of each other — unless we raise performance at an incredible rate, the cost of generating tokens keeps rising. Moore's Law is dead: the cost of a transistor is basically flat year over year, and so is its power. Caught between those two facts, unless we invent new technology to drive cost down, a discount of a few percentage points cannot compensate for two compounding exponentials. So we have to raise performance every year at a pace that keeps up with exponential growth. From Kepler all the way to Hopper was perhaps 100,000x — the beginning of Nvidia's AI journey, 100,000x in ten years. Between Hopper and Blackwell we achieved 30x in a single year, thanks to NVLink 72; Rubin will deliver another X-factor, and Feynman another. We do it this way because transistors no longer help us much — under what remains of Moore's Law, density still grows but performance essentially does not. So the challenge is to decompose the entire problem at the system level, changing every chip, every software stack, and every system at once. It is the ultimate extreme co-design; no one has co-designed at this level before. We revamp the CPU, the GPU, the networking chips, NVLink scale-up, Spectrum-X scale-out. People say, "oh, that's just Ethernet." Spectrum-X Ethernet is not just Ethernet, and people are starting to discover that, my goodness, the X-factors are quite incredible. Nvidia's Ethernet business — just the Ethernet business — is the fastest-growing Ethernet business in the world. And now we need to scale across, building even larger systems: we span multiple AI factories and connect them together. We do all of this on an annual rhythm. So on the technology side we now deliver an exponential of exponentials, which lets our customers drive down the cost per token while making those tokens smarter and smarter through pretraining, post-training, and thinking. And the result is that as AIs get smarter, they get used more; and as usage grows, it grows exponentially.

For those who may not be familiar: what is extreme co-design? Extreme co-design means optimizing the model, the algorithms, the system, and the chips simultaneously. You have to innovate outside the box. Moore's Law said you just make the CPU faster and faster and everything speeds up — that is innovating inside the box, just making the chip faster. But if the chip can't get faster, what do you do? You innovate outside the box. Nvidia truly changed things because we did two things: we invented CUDA and the GPU, and we invented the idea of co-design at massive scale. That is why we are in all of these industries; we create all of these libraries and co-design across them. Full-stack extremity now goes even beyond the software and the GPU: it reaches the data-center-level switches and networks, the software inside those switches, the network interface cards, scale-up, scale-out — optimized across all of it. That is how Blackwell is 30x Hopper; no amount of Moore's Law gets you that. That is the extreme, and it comes from extreme co-design. It is why Nvidia went into networking and switching, into scale-up, scale-out, and scale-across, and builds CPUs and GPUs and NICs. It is why Nvidia is so rich in software and in talent. We contribute more open-source software to the world than almost any other company — all but one, I believe; AI2 or someone like that. We have an enormous software estate, and that is just in AI. Don't forget computer graphics, digital biology, autonomous vehicles — the quantity of software this company produces is incredible, and it is what lets us co-design deeply and extremely.

Host: I've heard from one of your competitors: yes, he does this because it helps drive down token-generation cost — but your annual cadence also makes it nearly impossible for competitors to keep up. And the supply chain gets locked in further, because you give it three years of visibility; the supply chain now has confidence in what it can build. Have you thought about that?

Jensen Huang: Before you even ask that, consider this: for us to support hundreds of billions of dollars of AI infrastructure build-out each year, think about how much capacity we have to begin preparing a year in advance. We are talking about hundreds of billions of dollars of wafer starts and DRAM purchases. It has reached a scale that almost no company can keep up with.

Host: So do you believe your competitive moat is larger than it was three years ago?

Jensen Huang: Yes. First, competition is more intense than ever — but it is also harder than ever. I say that because wafer costs keep rising, which means that unless you do extreme-scale co-design, you cannot achieve the X-factors. Unless you develop six, seven, eight chips a year — which is remarkable. This is not about building an ASIC; it is about building an AI factory system. That system contains many chips, all co-designed together, and together they deliver the roughly 10x factors we get almost on schedule.
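The squeeze Huang describes — demand compounding on two exponentials while transistor cost and power stay flat — can be put into a toy model. Only the 30x Hopper-to-Blackwell factor and the "token generation doubles every few months" rate come from his remarks; the 24-month horizon, 4-month doubling period, and 15% discount are our illustrative assumptions:

```python
# Toy model of Huang's claim that a price discount cannot offset
# stacked exponentials. Only the 30x generation-over-generation factor
# and the doubling behavior come from the interview; the horizon,
# doubling period, and discount are illustrative assumptions.

months = 24
doubling_period_months = 4                  # assumed "every few months"
demand_growth = 2 ** (months / doubling_period_months)   # 64x in 2 years

# Cost to serve that demand, normalized to 1.0 today.
flat_perf_cost = demand_growth * 1.0        # perf/watt flat: cost tracks demand
discounted_cost = flat_perf_cost * 0.85     # a 15% price discount on flat perf
co_design_cost = demand_growth / 30         # one 30x perf/watt generation

# ~54.4 vs ~2.1: the X-factor swamps the discount
assert co_design_cost < discounted_cost
```

The point of the sketch is structural, not numerical: a one-time percentage discount is a constant factor, while demand and per-generation performance are both exponentials, so only another exponential can hold the cost per token down.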
First, the co-design is extreme. Second, the scale is extreme. When your customer deploys a gigawatt, that is 400,000 to 500,000 GPUs. Getting 500,000 GPUs to work together is a miracle. And your customers are taking enormous risk on you by buying all of it. Ask yourself: what customer places a $50 billion purchase order on one architecture — an unproven, brand-new architecture? A brand-new chip — you're excited about it, everyone is excited for you, you've just shown first silicon — who gives you a $50 billion purchase order for that? Why would anyone start $50 billion worth of wafers for a chip that just taped out? But for Nvidia they can, because our architecture is proven. The scale of our customers is incredible, and now the scale of our supply chain is incredible too. Who would start all of that, pre-build all of that, for one company — unless they knew Nvidia could deliver, to customers all over the world? They are willing to start hundreds of billions of dollars at a time. The scale is incredible.

Host: On that note, one of the biggest debates and controversies in the world is GPUs versus ASICs — Google's TPU, Amazon's Trainium; seemingly everyone from Arm to OpenAI to Anthropic is rumored to be building an ASIC. Last year you said: we build systems, not chips; you drive performance through every part of the stack. You also said that many of these projects may never reach production scale. But given how many projects there are, and the evident success of Google's TPU, how do you view this evolving landscape today?

Jensen Huang: First, the advantage Google has is foresight. Remember, they started TPU1 before any of this began. That is no different from a startup: you should found a startup before the market grows; you should not show up as a startup when the market is already at a trillion dollars. It is a fallacy — every VC knows this fallacy — that if you can capture a few percentage points of a giant market, you will be a giant company. That is fundamentally wrong. You should capture 100% of a small industry, the way Nvidia did and the way the TPU did. Back then there were just the two of us — and you have to hope the industry gets truly big. You are creating an industry. Nvidia's story illustrates this. For companies building ASICs now, that is the challenge. The market looks tempting today, but remember that this tempting market has evolved from a chip called a GPU into the AI factory I just described. You just saw me announce a chip called CPX, for context processing and diffusion video generation — a very specialized workload, but an important one inside the data center. I just mentioned the possibility of an AI data-processing processor. You need long-term and short-term memory; KV-cache handling is extremely intensive; AI memory is a big problem. You want your AI to have a good memory, and just managing the KV cache across the whole system is complex enough that it may deserve a dedicated processor. So you can see that Nvidia's view is no longer just the GPU. Our view is to look at the entire AI infrastructure and at what these great companies need to handle their diverse and constantly shifting workloads. Look at transformers: the transformer architecture is changing enormously. If CUDA weren't so easy to work with and iterate on, how would they run the mountain of experiments needed to decide which transformer variant to use, which attention algorithm, how to decompose the model? CUDA helps you do all of that, because it is so programmable. The way to think about our business now is this: when all of these ASIC companies and ASIC projects started three, four, five years ago, I have to tell you, the industry was very simple. It involved one GPU. Now it has become vast and complex, and in another two years it will be utterly vast. The scale will be enormous. So I think the fight to enter a very large market as a newcomer is a hard one — even for the customers who might succeed with their ASICs.

Host: Investors tend to be binary creatures; they want black-and-white, yes-or-no answers. But even if you make the ASIC work, isn't there an optimal balance? Because when you buy the Nvidia platform, CPX is coming for prefill and video generation, and presumably decode and more.

Jensen Huang: Yes — many different chips and parts will be added to the accelerated-computing clusters of the Nvidia ecosystem as new workloads emerge. The people trying to tape out a new chip right now aren't really predicting what happens a year out; they are just trying to get the chip to work. That said, Google is a large GPU customer, and Google is a very special case — we have to pay due respect. The TPU is on TPU7. It is a challenge for them too; the work they do is very hard. Let me lay out the three categories of chips. The first is architectural chips: x86 CPUs, Arm CPUs, Nvidia GPUs. They are architectures, with ecosystems on top; an architecture carries rich IP and an ecosystem; the technology is very complex, and it is built by its owner — like us. The second category is ASICs. I worked at LSI Logic, the original company that invented the ASIC concept — and as you know, LSI Logic no longer exists.
The reason is that an ASIC is great when the market isn't very large: it is easy for someone to act as a contractor, package everything up, and handle manufacturing on your behalf, charging a 50-60% gross margin. But when an ASIC's market gets big, there is a different approach called customer-owned tooling. Who does that? Apple's smartphone chips. Apple's volumes are enormous; they would never pay someone a 50-60% margin to build them an ASIC. They use customer-owned tooling. So where does the TPU go once it becomes a big business? Customer-owned tooling, without question. But ASICs have their place: a video transcoder will never be that big; a SmartNIC will never be that big. When an ASIC company has 10, 12, 15 ASIC projects, I'm not surprised — maybe five are SmartNICs and four are transcoders. Are they all AI chips? Of course not. If someone builds an embedding processor for a specific recommender system as an ASIC, sure, that can be done. But would you make it the foundational compute engine for an AI that is changing constantly? You have low-latency workloads, high-throughput workloads, token generation for chat, thinking workloads, AI video-generation workloads.

Host: You're talking about the backbone of accelerated computing.

Jensen Huang: That is what Nvidia is all about.

Host: Put simply, it's the difference between playing chess and playing checkers. The fact is that companies starting ASICs today — Trainium or the other accelerators — are building a chip that is one component of a much bigger machine. You have built an extraordinarily complex system, a platform, a factory, and now you are, in a sense, opening it up a bit. You mentioned the CPX GPU; in some ways you are disaggregating workloads onto the best piece of hardware for each specific domain.

Jensen Huang: We announced something called Dynamo — orchestration for disaggregated AI workloads — and we open-sourced it, because the AI factory of the future is disaggregated.

Host: You launched NVLink Fusion, saying even to your competitors, including Intel: here is how you participate in this factory we are building — because no one else is crazy enough to try to build the entire factory — but if you have a product good enough and compelling enough that end users say, "we want this instead of your Arm CPU," or "we want this instead of your inference accelerator," you can plug in.

Jensen Huang: We are happy to make the connection. Fusion is genuinely a great idea, and we are delighted to work with Intel on it. It leverages Intel's ecosystem — most of the world's enterprises still run on Intel platforms. We fuse the Intel ecosystem with the Nvidia AI ecosystem and accelerated computing. We are doing the same with Arm, and there are several other companies we will do it with as well. It opens opportunity in both directions; it is a win for both sides. I become a large customer of theirs, and they give us access to a larger market opportunity.

This ties closely to a point you raised that may have shocked some people. You said our competitors building ASICs — their chips are already cheaper today, and they could even set the price to zero. Our goal is that even if they priced their chips at zero, you would still buy the Nvidia system, because the total cost of operating that system — the power, the data center, the land — per unit of intelligence produced still comes out ahead, even when the chip is free. The land, power, and infrastructure are already worth on the order of $15 billion. We have worked through this math.

Host: But please walk us through the calculation, because for people who don't think about this every day it is genuinely hard to grasp: your chips are so expensive, the competitor's chip costs zero — how can yours still be the better choice?

Jensen Huang: There are two ways to think about it. One is from the revenue side. Everyone is power-limited. Suppose you are able to secure an extra two gigawatts of power; you want to convert those two gigawatts into revenue. If your performance — your tokens per watt — is twice everyone else's because you have done deep and extreme co-design, so that my performance per unit of energy is far higher, then my customer can generate twice the revenue from their data center. Who doesn't want twice the revenue? And if someone offers them a 15% discount: the difference between our gross margin, around 75 points, and everyone else's, around 50 to 65 points, cannot compensate for a 30x performance difference between Blackwell and Hopper. Suppose Hopper is an excellent chip and system, and suppose someone's ASIC is Hopper. Blackwell is 30x its performance. So in that gigawatt, you would be giving up 30x the revenue. The price is far too high. Even if they hand you the chips for free, you only have two gigawatts to work with, and the opportunity cost is enormous. You will always choose the best performance per watt.
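Huang's free-chip argument reduces to a short opportunity-cost calculation. The 2 GW power budget and the 30x Blackwell-over-Hopper factor are his figures; the normalized revenue unit is our assumption:

```python
# Opportunity-cost sketch of the "free chip" argument. The 2 GW budget
# and the 30x performance factor are Huang's figures; revenue is
# normalized (1.0 = what a Hopper-class system earns per gigawatt),
# which is an illustrative assumption.

power_budget_gw = 2.0
revenue_per_gw_hopper = 1.0       # free ASIC assumed equivalent to Hopper
perf_factor = 30                  # Blackwell vs. Hopper, per Huang

free_chip_revenue = power_budget_gw * revenue_per_gw_hopper
nvidia_revenue = power_budget_gw * revenue_per_gw_hopper * perf_factor

# A power-limited operator choosing the free chip forgoes 29x the
# revenue they kept, far more than any chip discount could recover.
forgone_multiple = (nvidia_revenue - free_chip_revenue) / free_chip_revenue
print(forgone_multiple)           # 29.0
```

Because power, not chip price, is the binding constraint, the comparison is revenue per gigawatt rather than cost per chip — which is why a zero chip price does not change the outcome in this model.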
Host: I've heard from the CFO of one of the hyperscale cloud providers that, given the performance gains your chips deliver — exactly this tokens-per-gigawatt point — and with power as the binding constraint, they have to upgrade to each new cycle. As you look ahead to Rubin, Rubin Ultra, and Feynman, does that trend continue?

Jensen Huang: We now build six or seven chips a year, and every one of them is part of one system. The system software runs through all of it; it has to be integrated and optimized across all six or seven chips to achieve Blackwell's 30x gain. Now imagine that I do this every year, continuously. If you build one ASIC somewhere in that lineup while we optimize across the entire system, that is a very hard problem to beat.

Host: Which brings me back to the question about the competitive moat from the start. We watch this closely — we are investors across the ecosystem, including in your competitors, from Google to Broadcom. But when I reason from first principles: are you increasing or decreasing your moat? You have moved to an annual cadence and you co-develop with the supply chain. The scale is far larger than anyone expected, which demands balance-sheet and development scale. The moves you have made through acquisition and organically — NVLink Fusion, CPX, the things we just discussed. All of it convinces me that, at least in building factories and systems, your moat is strengthening. Which makes this at least surprising: your P/E is far lower than that of most comparable companies. Part of that, I think, is the law of large numbers — the idea that a $4.5 trillion company cannot get much bigger. But I asked you this a year and a half ago, and I'll ask again today: if the market grows to where AI workloads are 5x or 10x, and we know what capex is doing, and so on — is there any conceivable scenario in which your revenue five years out is not two or three times higher than 2025? Given these advantages, what is the probability that revenue is not, in fact, much higher than today?

Jensen Huang: I'll answer it this way. As I have described, our opportunity is much larger than consensus assumes. And I will say it here: I think Nvidia may well become the first $10 trillion company. I have been around long enough — just ten years ago, as you will remember, people said there could never be a trillion-dollar company. Now there are ten of them.

Host: The world is bigger today — back to GDP and exponential growth.

Jensen Huang: The world is bigger, and people misunderstand what we do. They remember us as a chip company. We do make chips — we make the best chips in the world. But Nvidia is really an AI infrastructure company. We are your AI infrastructure partner, and our work with OpenAI is the perfect proof. We are their AI infrastructure partner, and we work with people in many different ways. We don't require anyone to buy everything from us. We don't require them to buy whole racks. They can buy chips, components, our networking. We have customers who buy only our CPUs, and customers who buy only our GPUs and pair them with someone else's CPUs and networking. We will sell it to you any way you like. My only request is that you buy something from us.

Host: We've discussed Elon Musk with xAI and the Colossus 2 project. As I said earlier, this is not only about better models — we also have to build, and we need world-class builders. And probably the best builder in this country is Elon Musk. We talked about Colossus 1 and the work he did there, standing up a coherent cluster of hundreds of thousands of H100s and H200s. Now he is building Colossus 2, perhaps 500,000 GPUs — the equivalent of millions of H100s — in one coherent cluster. I would not be surprised if he reaches gigawatt scale faster than anyone. What is the advantage of being a builder who can create the software and the models and also understands what it takes to build these clusters?

Jensen Huang: These AI supercomputers are complex systems. The technology is complex; the procurement is complex because of the financing; securing the land, the power, and the infrastructure, energizing it all, building and bringing up all of these systems. It may be the most complex systems undertaking humanity has yet attempted. Elon's great advantage is that in his head, all of these systems interoperate — every interdependency lives in a single mind, including the financing. He is himself a giant GPT, a giant supercomputer, the ultimate GPU. He has a big advantage there, and he has intense urgency and a genuine desire to build it. When will combines with skill, incredible things happen. It is quite singular.

Host: Next I want to talk about sovereign AI and the global AI race. Thirty years ago you could not have imagined meeting crown princes and kings in their palaces, or being a regular at the White House. The President says that you and Nvidia are vital to American national security. When I look at this, you would not be in those rooms unless governments considered it at least as existential as we treated nuclear technology in the 1940s. There is no government-funded Manhattan Project today; it is funded by Nvidia, OpenAI, Meta, Google. We now have companies at nation-state scale funding what presidents and kings consider existential to their economic and national security. Do you agree with that framing?

Jensen Huang: Nobody needs an atomic bomb, but everybody needs AI. That is a very big difference. AI is modern software. That is my starting point — from general-purpose to accelerated computing, from humans writing code line by line to AI writing the code. That foundation must not be forgotten: we have reinvented computing. No new species has appeared on Earth; we have simply reinvented computing. Everyone needs computing, and it needs to be democratized. That is why every nation has realized it must enter the AI world — everyone has to keep pace with computing. Nobody on Earth says, "I was using computers yesterday; tomorrow I am going back to sticks and fire." Everyone has to move with computing; it is simply modernizing. First, to participate in AI you have to encode your history, your culture, your values into AI. Of course, AI keeps getting smarter, and even a core AI can learn those things fairly quickly; you don't have to start from zero. So I believe every country needs some sovereign capability. I recommend that they all use OpenAI, Gemini, the open models, and Grok; I recommend they all use Anthropic. But they should also invest in learning how to build AI — not only for language models, but for industrial models, manufacturing models, national-security models. There is a great deal of intelligence they need to cultivate themselves. So they should have sovereign capability, and every country should develop it. Is that what you hear around the world as well? They have all realized it. They will all be customers of OpenAI, Anthropic, Grok, and Gemini, but they genuinely need to build their own infrastructure too. That is the big idea behind what Nvidia does: we build infrastructure. Just as every country needs energy infrastructure and communications and internet infrastructure, every country now needs AI infrastructure.

Let me start with the rest of the world. Our good friend David Sacks and the AI team are doing excellent work; we are very fortunate to have David and Sriram in Washington, D.C. David serving as AI czar — President Trump placing them in the White House was a wise move. At this critical moment, the technology is complex. Sriram is, I believe, the only person in D.C. who knows CUDA, which is a little odd. But I love the fact that at a moment when the technology is complex, the policy is complex, and the stakes for our country's future are so high, we have someone clear-headed who puts in the time to understand the technology and thoughtfully helps us navigate it. Technology, like corn and steel in the past, is now a fundamental trade opportunity — a major component of trade. Why wouldn't you want American technology to be coveted by everyone, so it can be traded? Trump has done several things that are very good for bringing everyone along. The first is the reindustrialization of America — encouraging companies to come build in the United States, to invest in factories, and to retrain and upskill the skilled-trades workforce, which is extraordinarily valuable for this country. We love craft; I love people who make things with their hands; and now we are going back to building things — grand, incredible things. I love it. It will change America, without question. We have to recognize that American reindustrialization will be fundamentally transformative. Then, of course, there is AI. It is the greatest equalizer. Think about it: everyone can now have an AI. It is the ultimate equalizer — we have erased the technology divide. Remember, the last time someone had to learn to use a computer to gain an economic or career benefit, they had to learn C++ or C, or at least Python. Now they only need human language. If you don't know how to program an AI, you tell the AI, "Hi, I don't know how to program an AI. How do I program you?" and the AI explains it, or does it for you. It does it for you. Isn't that incredible? We have used technology to eliminate the technology divide. This is something everyone must participate in. OpenAI has 800 million active users. My goodness — it really needs to get to 6 billion, and then to 8 billion, soon. I think that is the first point.

Then the second and third points: I believe AI will change tasks. What people conflate is that many tasks will be eliminated and many tasks will actually be created — but for a great many people, their jobs are effectively protected. For example, I use AI constantly. You use it constantly. My analysts use it constantly. My engineers — every single one of them — use AI continuously. And we are hiring more engineers, hiring more people, hiring across the board. The reason is that we have more ideas. We can now pursue more of them, because the company has become more productive. Because we are more productive, we become wealthier; because we are wealthier, we can hire more people to chase those ideas. The idea that AI brings mass job destruction starts from the premise that we have run out of ideas — that everything we do in life today is the end state, and that if someone else does my one task for me, I have nothing left to do; I just sit and wait for retirement in my rocking chair. That idea makes no sense to me. Intelligence is not a zero-sum game. The smarter the people around me, the more geniuses around me, the more ideas I have, remarkably — the more problems I can imagine us solving, the more work we create, the more jobs we create. I don't know what the world looks like in a million years; I'll leave that to my children. But over the next few decades, my sense is that the economy will grow. Many new jobs will be created. Every job will be changed. Some jobs will be lost. We no longer ride horses down the street, and those things will work out fine. Humans are notoriously skeptical about, and bad at understanding, compounding systems — and even worse at understanding exponential systems that accelerate with scale.

Host: We have talked about a lot of exponentials today. The great futurist Ray Kurzweil says that in the 21st century we won't get a hundred years of progress — we may get twenty thousand years of progress. You have said before that we are lucky to live in this moment and to contribute to it. I won't ask you to look out 10, 20, or 30 years, because I think that is too challenging. Although — when we think about 2030 and things like robotics, you've said 30 years is easier than 2030. Really? Yes. Fine, then I will give you permission to look out 30 years. I like the shorter timeframes because they have to combine bits and atoms, which is the hard part of building these things. Everyone says it is coming, which is interesting but not entirely useful. But if we really do get 20,000 years of progress, think about Ray's line, think about exponential growth — and all of our listeners, whether you work in government, at a startup, or run a large company, need to think about the accelerating pace of change, the accelerating pace of growth, and how you will work alongside AI in this new world.

Jensen Huang: Many people have said many things, and they are all plausible. I think one of the genuinely cool things that will be solved in the next five years is the fusion of AI and robotics. We will have AIs moving through the world beside us. Everyone knows we are all going to grow up with our own R2-D2. That R2-D2 will remember everything about us, guide us along the way, and be our companion. We already know this. Everyone will have their own associated GPU in the cloud — 8 billion people, 8 billion GPUs is a plausible outcome — each person with a model tailored to them. And that cloud AI will also be embodied everywhere: in your car, in your own robot; it will be everywhere. I think that future is entirely plausible. We will come to understand the boundless complexity of biology — understand biological systems and how to predict them — and have a digital twin of every person. We will have digital twins of ourselves in healthcare, just as we have shopping digital twins at Amazon; why wouldn't we have them in healthcare? Of course we will. A system that predicts how we will age, what diseases we might develop, anything that is coming — maybe next week, maybe even tomorrow afternoon — and predicts it in advance. Of course we will have all of that. I take all of it as a given. The CEOs I work with often ask me: given all of this, what happens? What should you do? Here is the common sense of fast-moving things: if there is a train that is about to go faster and faster, exponentially, what you really need to do is get on it. Once you are aboard, you figure out everything else along the way. Trying to predict where the train will be and firing a bullet at it, or predicting where a train that accelerates exponentially every second will be and figuring out which junction to wait for it at — that is impossible. Just get on while it is still moving relatively slowly, and then grow exponentially along with it.

Host: A lot of people think this happened overnight. You have been working in this field for 35 years. I remember hearing Larry Page say, around 2005 or 2006, that Google's end state is when the machine can anticipate your question before you ask it and give you the answer without your having to look it up. And in 2016 I heard Bill Gates, when people were saying everything had already been done — we had the internet, cloud, mobile, social — say, "We haven't even started." Someone asked, "Why do you say that?" He said, "We only truly begin when machines go from being dumb calculators to thinking for themselves — thinking with us." That is the moment we are in. Having leaders like you, like Sam and Elon and Satya and the others, is such an extraordinary advantage for this country. And seeing the cooperation with the venture capital system we have — I am part of it, able to put risk capital in people's hands. It truly is an extraordinary time. But I am also grateful that the leaders we have understand their responsibility: we are creating change at an accelerating rate, and we know that while it will very likely be good for the vast majority, there will be challenges along the way. We will deal with them as they arise, raise the floor for everyone, and make sure this is a win not just for a handful of elites in Silicon Valley. Don't frighten people — bring them along. We will get it done.

Jensen Huang: Yes.

Host: So, thank you.

Jensen Huang: Exactly right.