Unnamed: 0      int64
symbol          string
quarter         int64
year            int64
date            string
company_name    string
company_id      float64
text            string
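For orientation, a minimal pandas sketch of loading a table with this schema. The file name transcripts.csv (and reading via CSV at all) is an assumption; only the column names and dtypes come from the dump above.

```python
import pandas as pd

# Assumed file name; the "Unnamed: 0" column is the exported row index.
df = pd.read_csv("transcripts.csv")

# Enforce the dtypes listed above ("string" is the pandas extension dtype).
df = df.astype({
    "symbol": "string",
    "quarter": "int64",
    "year": "int64",
    "date": "string",            # e.g. "2024-11-20 17:00:00"
    "company_name": "string",
    "company_id": "float64",
    "text": "string",
})

# Parse the timestamp and pull the chunks for one call, e.g. NVDA Q3 FY2025.
df["date"] = pd.to_datetime(df["date"])
call = df[(df["symbol"] == "NVDA") & (df["year"] == 2025) & (df["quarter"] == 3)]
print(call["text"].str.len().describe())  # rough size of each transcript chunk
```

Each row holds one overlapping chunk of a call transcript in text, keyed by symbol, fiscal quarter, and fiscal year.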
5,800
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
Operator: Your next question comes from the line of Aaron Rakers of Wells Fargo. Your line is open. Aaron Rakers: Yes, thanks for taking the question. I wanted to ask you as we kind of focus on the Blackwell cycle and think about the data center business. When I look at the results this last quarter, Colette, you mentioned that obviously, the networking business was down about 15% sequentially, but then your comments were that you were seeing very strong demand. You mentioned also that you had multiple cloud CSP design wins for these large-scale clusters. So I'm curious if you could unpack what's going on in the networking business and where maybe you've seen some constraints and just your confidence in the pace of Spectrum-X progressing to that multiple billions of dollars that you previously had talked about. Thank you. Colette Kress: Let's first start with the networking. The growth year-over-year is tremendous and our focus since the beginning of our acquisition of Mellanox has really been about building together the work that we do in terms of -- in the data center. The networking is such a critical part of that. Our ability to sell our networking with many of our systems that we are doing in the data center is continuing to grow and do quite well. So this quarter is just a slight dip down and we're going to be right back up in terms of growing. They're getting ready for Blackwell and more and more systems that will be using not only our existing networking but also the networking that is going to be incorporated in a lot of these large systems that we are providing them. Operator: Your next question comes from the line of Atif Malik of Citi. Your line is open.
5,801
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
Operator: Your next question comes from the line of Atif Malik of Citi. Your line is open. Atif Malik: Thank you for taking my question. I have two quick ones for Colette. Colette, on the last earnings call, you mentioned that sovereign demand is in low double-digit billions. Can you provide an update on that? And then can you explain the supply-constrained situation in gaming? Is that because you're shifting your supply towards data center? Colette Kress: So first starting in terms of sovereign AI, such an important part of growth, something that really surfaced with the onset of generative AI and building models in the individual countries around the world. And we see a lot of them and we talked about a lot of them in the call today and the work that they are doing. So our sovereign AI and our pipeline going forward is still absolutely intact as those are working to build these foundational models in their own language, in their own culture, and working in terms of the enterprises within those countries. And I think you'll continue to see growth opportunities with our regional clouds that are being stood up and/or those that are focusing on AI factories for many parts of sovereign AI. This is an area that is growing not only in Europe, but also in Asia-Pac as well. Let me flip to your second question that you asked regarding gaming. So our gaming right now from a supply standpoint, we're busy trying to make sure that we can ramp all of our different products. And in this case, our gaming supply, given what we saw selling through, was moving quite fast. Now the challenge that we have is how fast we could get that supply ready into the market for this quarter. Not to worry, I think we'll be back on track with more supply as we turn the corner into the new calendar year. We're just going to be tight for this quarter.
5,802
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
Operator: Your next question comes from the line of Ben Reitzes of Melius Research. Your line is open. Ben Reitzes: Yes. Hi. Thanks a lot for the question. I wanted to ask Colette and Jensen with regard to sequential growth. So very strong sequential growth this quarter and you're guiding to about 7%. Do your comments on Blackwell imply that we reaccelerate from there as you get more supply? Just in the first half, it would seem that there would be some catch-ups. So I was wondering how prescriptive you could be there. And then, Jensen, just overall, with the change in administration that's going to take place here in the US and the China situation, have you gotten any sense, or any conversations about tariffs, or anything with regard to your China business? Any sense of what may or may not go on? It's probably too early, but wondering if you had any thoughts there. Thanks so much. Jensen Huang: We guide one quarter at a time. Colette Kress: We are working right now on the quarter that we're in and building what we need to ship in terms of Blackwell. We have every supplier on the planet working seamlessly with us to do that. And once we get to next quarter, we'll help you understand in terms of that ramp that we'll see to the next quarter and after that. Jensen Huang: Whatever the new administration decides, we will of course support the administration. And that's our -- the highest mandate. And then after that, do the best we can, just as we always do. And so we will comply fully with any regulation that comes along, support our customers to the best of our abilities, and compete in the marketplace. We'll do all three of these things simultaneously. Operator: Your final question comes from the line of Pierre Ferragu of New Street Research. Your line is open.
5,803
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
Operator: Your final question comes from the line of Pierre Ferragu of New Street Research. Your line is open. Pierre Ferragu: Hey, thanks for taking my question. Jensen, you mentioned in your comments you have the pre-training of the actual language models and you have reinforcement learning that becomes more and more important in training and in inference as well. And then you have inference itself. And I was wondering if you have a sense, like a high-level, typical sense, of how -- out of an overall AI ecosystem, like maybe one of your clients or one of the large models that are out there -- how much of the compute goes into each of these buckets today? How much for the pre-training, how much for the reinforcement, and how much into inference today? Do you have any sense for how it's splitting and where the growth is the most important as well?
5,804
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Well, today it's vastly in pre-training a foundation model because as you know, post-training, the new technologies are just coming online and whatever you could do in pre-training and post-training, you would try to do so that the inference cost could be as low as possible for everyone. However, there are only so many things that you could do a priori. And so you'll always have to do on-the-spot thinking and in-context thinking and reflection. And so I think that the fact that all three are scaling is actually very sensible based on where we are. And in the area of foundation models, now we have multimodality foundation models and the amount of petabytes of video that these foundation models are going to be trained on is incredible. And so my expectation is that for the foreseeable future, we're going to be scaling pre-training, post-training, as well as inference-time scaling, which is the reason why I think we're going to need more and more compute and we're going to have to drive as hard as we can to keep increasing the performance by X factors at a time so that we can continue to drive down the cost and continue to increase their revenues and get the AI revolution going. Thank you. Operator: Thank you. I'll now turn the call back over to Jensen Huang for closing remarks.
5,805
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Thank you. The tremendous growth in our business is being fueled by two fundamental trends that are driving global adoption of NVIDIA computing. First, the computing stack is undergoing a reinvention, a platform shift from coding to machine learning -- from executing code on CPUs to processing neural networks on GPUs. The trillion-dollar installed base of traditional data center infrastructure is being rebuilt for Software 2.0, which applies machine learning to produce AI. Second, the age of AI is in full steam. Generative AI is not just a new software capability, but a new industry with AI factories manufacturing digital intelligence, a new industrial revolution that can create a multi-trillion dollar AI industry. Demand for Hopper and anticipation for Blackwell, which is now in full production, are incredible for several reasons. There are more foundation model makers now than there were a year ago. The computing scale of pre-training and post-training continues to grow exponentially. There are more AI-native start-ups than ever and the number of successful inference services is rising. And with the introduction of ChatGPT o1, OpenAI's o1, a new scaling law called test-time scaling has emerged. All of these consume a great deal of computing. AI is transforming every industry, company, and country. Enterprises are adopting agentic AI to revolutionize workflows. Over time, AI coworkers will assist employees in performing their jobs faster and better. Investments in industrial robotics are surging due to breakthroughs in physical AI, driving new training infrastructure demand as researchers train world foundation models on petabytes of video and Omniverse synthetically generated data. The age of robotics is coming. Countries across the world recognize the fundamental AI trends we are seeing and have awakened to the importance of developing their national AI infrastructure. The age of AI is upon us and it's large and diverse. NVIDIA's expertise, scale, and ability to deliver full stack and full
5,806
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
The age of AI is upon us and it's large and diverse. NVIDIA's expertise, scale, and ability to deliver full stack and full infrastructure let us serve the entire multi-trillion dollar AI and robotics opportunity ahead -- from every hyperscale cloud and enterprise private cloud to sovereign regional AI clouds, and from on-prem to industrial edge and robotics. Thanks for joining us today and catch up next time.
5,807
NVDA
3
2025
2024-11-20 17:00:00
NVIDIA Corporation
32,307
Operator: This concludes today's conference call. You may now disconnect.
5,808
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: Good afternoon. My name is Abby and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Second Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. And Mr. Stewart Stecker, you may begin your conference.
5,809
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Stewart Stecker: Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I would like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 28th, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight an upcoming event for the financial community. We will be attending the Goldman Sachs Communacopia and Technology Conference on September 11 in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20th, 2024. With that, let me turn the call over to Colette.
5,810
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Colette Kress: Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year-on-year and well above our outlook of $28 billion. Starting with data center: data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year-on-year, driven by strong demand for NVIDIA Hopper, GPU computing, and our networking platforms. Compute revenue grew more than 2.5 times and networking revenue grew more than 2 times from last year. Cloud service providers represented roughly 45% of our data center revenue and more than 50% stemmed from consumer Internet and enterprise companies. Customers continue to accelerate their Hopper architecture purchases, while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image, and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing as well. Next-generation models will require 10 to 20 times more compute to train with significantly more data. The trend is expected to continue. Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer Internet companies, and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform. Demand for NVIDIA is coming from frontier model makers, consumer Internet services, and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise and healthcare, and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand. The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet, and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering
5,811
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
consumer Internet, and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100. Our data center revenue in China grew sequentially in Q2 and is a significant contributor to our data center revenue. As a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward. The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks. At Computex, NVIDIA with the top computer manufacturers unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems, designed quickly and cost-effectively. The NVIDIA Blackwell platform brings together multiple GPUs, CPUs, DPUs, NVLink and NVLink Switch, networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries, and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU and deliver up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time. Hopper demand is strong and Blackwell is widely sampling. We executed a change to the Blackwell GPU mask to improve production yields. Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26. In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking
5,812
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking revenue increased 16% sequentially. Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI to connect the largest GPU compute cluster in the world. Spectrum-X supercharges Ethernet for AI processing and delivers 1.6 times the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to become a multi-billion dollar product line within a year. Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries. Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year. The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new monetizable business applications and enhance employee productivity. Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build
5,813
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, which serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI Omniverse to reduce end-to-end cycle times for their factories by 50%. Automotive was a key growth driver for the quarter as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billion dollars in revenue across on-prem and cloud consumption and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multi-billion dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing, and drug discovery. During the quarter, we announced a new NVIDIA AI foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI. Companies for the first time can leverage the capabilities of an open-source frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business. Accenture is the first to adopt the new service to build custom Llama 3.1 models for both its own use and to assist clients seeking to deploy generative AI applications. NVIDIA NIMs accelerate and simplify model deployment. Companies across healthcare, energy, financial services, retail, transportation, and telecommunications are adopting NIMs, including Aramco, Lowe's, and Uber. AT&T realized 70% cost savings and 8 times latency reduction after moving to NIMs for generative AI call transcription and classification. Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of
5,814
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval augmented generation. Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS, and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth. Moving to gaming and AI PCs. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year-on-year. We saw sequential growth in console, notebook, and desktop revenue; demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS and there are now over 200 RTX AI laptop designs from leading PC manufacturers. With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model Minitron-4B, optimized for on-device inference. The NVIDIA gaming ecosystem continues to grow, recently adding RTX and DLSS titles including Indiana Jones and the Great Circle, Dune: Awakening, and Dragon Age: The Veilguard. The GeForce NOW library continues to expand with a total catalog size of over 2,000 titles, the most content of any cloud gaming
5,815
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
The GeForce NOW library continues to expand with a total catalog size of over 2,000 titles, the most content of any cloud gaming service. Moving to pro visualization. Revenue of $454 million was up 6% sequentially and 20% year-on-year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations. The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. And several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins for factories. We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workflows, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its generative AI-enabled content creation pipeline for customers such as the Coca-Cola Company. Moving to automotive and robotics, revenue was $346 million, up 5% sequentially and up 37% year-on-year. Year-on-year growth was driven by new customer ramps in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI, and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids, and mobile robots. Now moving to the rest of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new
5,816
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within data center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion. In Q2, we utilized cash of $7.4 billion towards shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our Board of Directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2. Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect the Blackwell production ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points. As our data center mix continues to shift to new products, we expect this trend to continue into the fourth quarter of fiscal 2025. For the full year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively. Full-year operating expenses are expected to grow in the mid-to-upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly-held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions. Operator, would you please help us poll for
5,817
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
available on our IR website. We are now going to open the call for questions. Operator, would you please help us poll for questions.
5,818
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: Thank you. [Operator Instructions] And your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open. Vivek Arya: Thanks for taking my question. Jensen, you mentioned in the prepared comments that there is a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else? And I think related, you suggested that you could ship several billion dollars of Blackwell in Q4 despite the change in the design. Is it because all these issues will be solved by then? Just help us size the overall impact of any changes in Blackwell timing, what that means for your revenue profile, and how customers are reacting to it. Jensen Huang: Yes. Thanks, Vivek. The change to the mask is complete. There were no functional changes necessary. And so we're sampling functional samples of Blackwell -- Grace Blackwell -- in a variety of system configurations as we speak. There are something like 100 different types of Blackwell-based systems that were built and shown at Computex. And we're enabling our ecosystem to start sampling those. The functionality of Blackwell is as it is, and we expect to start production in Q4. Operator: And your next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open. Toshiya Hari: Hi, thank you so much for taking the question. Jensen, I had a relatively longer-term question. As you may know, there's a pretty heated debate in the market on your customers' and customers' customers' return on investment and what that means for the sustainability of CapEx going forward. Internally at NVIDIA, what are you guys watching? What's on your dashboard as you try to gauge customer return and how that impacts CapEx? And then a quick follow-up maybe for Colette. I think your sovereign AI number for the full year went up maybe a couple of billion. What's driving the improved outlook? And how should we think about fiscal '26? Thank you.
5,819
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Thanks, Toshiya. First of all, when I said ship production in Q4, I mean shipping out. I don't mean starting to ship, but I mean -- I don't mean starting production, but shipping out. On the longer-term question, let's take a step back. You've heard me say that we're going through two simultaneous platform transitions at the same time. The first one is transitioning from general-purpose computing to accelerated computing. And the reason for that is because CPU scaling has been known to be slowing for some time, and it has slowed to a crawl. And yet the amount of computing demand continues to grow quite significantly. You could maybe even estimate it to be doubling every single year. And so if we don't have a new approach, computing inflation would be driving up the cost for every company, and it would be driving up the energy consumption of data centers around the world. In fact, you're seeing that. And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale, for example, scientific simulations or database processing. But what that translates directly to is lower cost and lower energy consumed. And in fact, this week, there's a blog that came out that talked about a whole bunch of new libraries that we offer. And that's really the core of the first platform transition, going from general-purpose computing to accelerated computing. And it's not unusual to see someone save 90% of their computing cost. And the reason for that is, of course, you just sped up an application 50x, so you would expect the computing cost to decline quite significantly. The second was enabled by accelerated computing because we drove down the cost of training large language models or training deep learning so incredibly, that it is now possible to have gigantic-scale models, multi-trillion parameter models, and train it on -- pre-train it on just about the world's knowledge corpus and
5,820
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
scale models, multi-trillion parameter models, and train it on -- pre-train it on just about the world's knowledge corpus and let the model go figure out how to understand human language representation and how to codify knowledge into its neural networks and how to learn reasoning -- which caused the generative AI revolution. Now, generative AI -- taking a step back about why it is that we went so deeply into it -- is because it's not just a feature, it's not just a capability, it's a fundamental new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what the expected answers are, what our previous observations are, and then have it figure out what the algorithm is, what the function is. AI is a bit of a universal function approximator and it learns the function. And so you could learn the function of almost anything -- anything that's predictable, anything that has structure, anything that you have previous examples of. And so now here we are with generative AI. It's a fundamental new form of computer science. It's affecting how every layer of computing is done, from CPU to GPU, from human-engineered algorithms to machine-learned algorithms. And the type of applications you could now develop and produce is fundamentally remarkable. And there are several things that are happening in generative AI. So the first thing that's happening is the frontier models are growing in quite substantial scale. And we're still all seeing the benefits of scaling. And whenever you double the size of a model, you also have to more than double the size of the dataset to go train it. And so the amount of FLOPs necessary in order to create that model goes up quadratically. And so it's not unexpected to see that the next-generation models could take 10, 20, 40 times more compute than the last generation. So we have to
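(To make the scaling arithmetic above concrete -- a sketch using the standard dense-transformer training-compute estimate, which is an assumption of this note, not a figure from the call: for $N$ parameters trained on $D$ tokens,

$$C \approx 6ND, \qquad N \to kN,\ D \to kD \;\Longrightarrow\; C \to k^2 C,$$

so doubling both model and dataset size, $k = 2$, roughly quadruples training compute, and $k$ between about 3 and 6 lands in the 10x to 40x generational range described here.)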
5,821
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
to see that the next-generation models could take 10, 20, 40 times more compute than the last generation. So we have to continue to drive the generational performance up quite significantly, so we can drive down the energy consumed and drive down the cost necessary to do it. So the first one is, there are larger frontier models trained on more modalities, and surprisingly, there are more frontier model makers than last year. And so you have more on more on more. That's one of the dynamics going on in generative AI. The second dynamic is below the tip of the iceberg. What we see are ChatGPT, image generators, and coding. We use generative AI for coding quite extensively here at NVIDIA now. We, of course, have a lot of digital designers and things like that. But those are kind of the tip of the iceberg. What's below the iceberg are the largest computing systems in the world today, which are -- and you've heard me talk about this in the past -- recommender systems, now moving from CPUs to generative AI. So recommender systems, ad generation -- custom ad generation, targeting ads at very large scale and quite hyper-targeted -- search, and user-generated content. These are all very large-scale applications that have now evolved to generative AI. Of course, the number of generative AI startups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners, and sovereign AI -- countries that are now realizing that their data is their natural and national resource and they have to use AI, build their own AI infrastructure, so that they could have their own digital intelligence. Enterprise AI, as Colette mentioned earlier, is starting, and you might have seen our announcement that the world's leading IT companies are joining us to take the NVIDIA AI Enterprise platform to the world's enterprises. The companies that we're talking to, so many of them are just so incredibly excited to drive more productivity out of their
5,822
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
The companies that we're talking to, so many of them are just so incredibly excited to drive more productivity out of their company. And then general robotics. The big transformation over the last year is that we are now able to learn physical AI from watching video and human demonstration, and from synthetic data generation with reinforcement learning, from systems like Omniverse. We are now able to work with just about every robotics company to start thinking about and building general robotics. And so you can see that there are just so many different directions that generative AI is going. And so we're actually seeing the momentum of generative AI accelerating.
5,823
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Colette Kress: And Toshiya, to answer your question regarding sovereign AI and our goals in terms of growth, in terms of revenue: it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desires of countries around the world to have their own generative AI that would be able to incorporate their own language, incorporate their own culture, incorporate their own data in that country. So more and more excitement around these models and what they can do specifically for those countries. So yes, we are seeing some growth opportunity in front of us. Operator: And your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open. Joe Moore: Great. Thank you. Jensen, in the press release, you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong quarter without Blackwell in October. So how long do you see sort of coexisting strong demand for both? And can you talk about the transition to Blackwell? Do you see people intermixing clusters? Do you think most of the Blackwell activity is new clusters? Just some sense of what that transition looks like?
5,824
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Yes. Thanks, Joe. The demand for Hopper is really strong, and it's true, the demand for Blackwell is incredible. There's a couple of reasons for that. The first reason is, if you just look at the world's cloud service providers and the amount of GPU capacity they have available, it's basically none. And the reason for that is because they're either being deployed internally for accelerating their own workloads -- data processing, for example. Data processing, we hardly ever talk about it because it's mundane. It's not very cool because it doesn't generate a picture or generate words, but almost every single company in the world processes data in the background. And NVIDIA's GPUs are the only accelerators on the planet that process and accelerate data: SQL data, data science toolkits like Pandas and the new one, Polars -- these are the most popular data processing platforms in the world. And aside from CPUs, which as I've mentioned before are really running out of steam, NVIDIA's accelerated computing is really the only way to get boosting performance out of that. And so that's number one: the primary use case, long before generative AI came along, is the migration of applications one after another to accelerated computing. The second is, of course, the rentals. They're renting capacity to model makers or renting it to startup companies, and a generative AI company spends the vast majority of their invested capital into infrastructure so that they could use an AI to help them create products. And so these companies need it now. You just raised money, they want you to put it to use now. You have processing that you have to do. You can't do it next year, you've got to do it today. And so that's one reason. The second reason for Hopper demand right now is because of the race to the next plateau. The first person to the next plateau gets to introduce a revolutionary level of AI. So the second person who gets there
5,825
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
person to the next plateau gets to introduce a revolutionary level of AI. The second person who gets there is incrementally better or about the same. And so the ability to systematically and consistently race to the next plateau and be the first one there is how you establish leadership. NVIDIA is constantly doing that and we show that to the world in the GPUs we make, the AI factories that we make, the networking systems that we make, the SoCs we create. I mean, we want to set the pace. We want to be consistently the world's best. And that's the reason why we drive ourselves so hard. And of course, we also want to see our dreams come true, and all of the capabilities that we imagine in the future, and the benefits that we can bring to society. We want to see all that come true. And so these model makers are the same. Of course, they want to be the world's best, they want to be the world's first. And although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks and a month or so away. And so between now and then is a lot of generative AI market dynamic. And so everybody is just really in a hurry. It's either operational reasons that they need it -- they need accelerated computing, they don't want to build any more general-purpose computing infrastructure -- and even Hopper -- you know, of course, H200 is state-of-the-art. If you have a choice between building CPU infrastructure right now for business or Hopper infrastructure for business right now, that decision is relatively clear. And so I think people are just clamoring to transition the trillion dollars of established installed infrastructure to a modern infrastructure, and Hopper's state-of-the-art.
5,826
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: And your next question comes from the line of Matt Ramsay with TD Cowen. Your line is open. Matt Ramsay: Thank you very much. Good afternoon, everybody. I wanted to kind of circle back to an earlier question about the debate that investors are having about, I don't know, the ROI on all of this CapEx, and hopefully this question and the distinction will make some sense. But what I'm having discussions about is with, like, the percentage of folks that you see that are spending all of this money and looking to sort of push the frontier towards AGI convergence and, as you just said, a new plateau in capability. And they're going to spend regardless to get to that level of capability, because it opens up so many doors for the industry and for their company, versus customers that are really, really focused today on CapEx versus ROI. I don't know if that distinction makes sense. I'm just trying to get a sense of how you're seeing the priorities of people that are putting the dollars in the ground on this new technology, and what their priorities and time frames are for that investment? Thanks.
5,827
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Thanks, Matt. The people who are investing in NVIDIA infrastructure are getting returns on it right away. It's the best ROI computing infrastructure investment you can make today. And so one way to think through it, probably the easiest way to think through it, is just to go back to first principles. You have a trillion dollars' worth of general-purpose computing infrastructure and the question is, do you want to build more of that or not? And for every $1 billion worth of general CPU-based infrastructure that you stand up, you probably rent it for less than $1 billion, because it's commoditized. There's already a trillion dollars on the ground. What's the point of getting more? And so the people who are clamoring to get this infrastructure: one, when they build out Hopper-based infrastructure and soon Blackwell-based infrastructure, they start saving money. That's tremendous return on investment. And the reason why they start saving money is because data processing saves money, and data processing is just a giant part of it already. And so recommender systems save money, so on and so forth, okay? And so you start saving money. The second thing is everything you stand up is going to get rented, because so many companies are being founded to create generative AI. And so your capacity gets rented right away and the return on investment of that is really good. And then the third reason is your own business. Do you want to either create the next frontier yourself, or have your own Internet services benefit from a next-generation ad system, a next-generation recommender system, or a next-generation search system? So for your own services, for your own stores, for your own user-generated content social media platforms, for your own services, generative AI is also a fast ROI. And so there's a lot of ways you could think through it. But at the core, it's because it is the best computing infrastructure you could put in the ground today. The world of general-purpose computing is
5,828
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
because it is the best computing infrastructure you could put in the ground today. The world of general-purpose computing is shifting to accelerated computing. The world of human-engineered software is moving to generative AI software. If you were to build infrastructure to modernize your cloud and your data centers, build it with accelerated computing and NVIDIA. That's the best way to do it.
5,829
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: And your next question comes from the line of Timothy Arcuri with UBS. Your line is open. Timothy Arcuri: Thanks a lot. I had a question on the shape of the revenue growth both near and longer-term. I know, Colette, you did increase OpEx for the year. And if I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish. On the other hand, there's some school of thought that not that many customers really seem ready for liquid cooling and I do recognize that some of these racks can be air-cooled. But Jensen, is that something to consider sort of on the shape of how Blackwell is going to ramp? And then I guess when you look beyond next year, which is obviously going to be a great year and you look into '26, do you worry about any other gating factors like, say, the power, supply-chain or at some point, models start to get smaller? I'm just wondering if you could speak to that? Thanks.
5,830
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: I'm going to work backwards. I really appreciate the question, Tim. So remember, the world is moving from general-purpose computing to accelerated computing. And the world builds about $1 trillion worth of data centers -- $1 trillion worth of data centers in a few years will be all accelerated computing. In the past, no GPUs were in data centers, just CPUs. In the future, every single data center will have GPUs. And the reason for that is very clear: because we need to accelerate workloads so that we can continue to be sustainable, continue to drive down the cost of computing, so that when we do more computing, we don't experience computing inflation. Second, we need GPUs for a new computing model called generative AI that we can all acknowledge is going to be quite transformative to the future of computing. And so I think working backwards, the way to think about that is the next trillion dollars of the world's infrastructure will clearly be different than the last trillion, and it will be vastly accelerated. With respect to the shape of our ramp, we offer multiple configurations of Blackwell. Blackwell comes in either a Blackwell classic, if you will, that uses the HGX form factor that we pioneered with Volta -- I think it was Volta. And so we've been shipping the HGX form factor for some time. It is air-cooled. The Grace Blackwell is liquid-cooled. However, the number of data centers that want to go liquid-cooled is quite significant. And the reason for that is because in a liquid-cooled data center -- in any power-limited data center, whatever size data center you choose -- you could install and deploy anywhere from 3 times to 5 times the AI throughput compared to the past. And so liquid cooling is cheaper, liquid cooling TCO is better, and liquid cooling allows you to have the benefit of this capability we call NVLink, which allows us to expand it to 72 Grace Blackwell packages, which have essentially 144 GPUs. And so imagine 144 GPUs connected in
5,831
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
allows us to expand it to 72 Grace Blackwell packages, which have essentially 144 GPUs. And so imagine 144 GPUs connected in NVLink, and we're increasingly showing you the benefits of that. And the next click is obviously very low-latency, very high-throughput, large language model inference, and the large NVLink domain is going to be a game-changer for that. And so I think people are very comfortable deploying both. And so almost every CSP we're working with is deploying some of both. And so I'm pretty confident that we'll ramp it up just fine. Your second question out of the three is that, looking forward, yes, next year is going to be a great year. We expect to grow our data center business quite significantly next year. Blackwell is going to be a complete game-changer for the industry. And Blackwell is going to carry into the following year. And as I mentioned earlier, working backwards from first principles, remember that computing is going through two platform transitions at the same time, and that's just really, really important to keep your mind focused on: general-purpose computing is shifting to accelerated computing and human-engineered software is going to transition to generative AI, or artificial intelligence learned, software.
5,832
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: And your next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open. Stacy Rasgon: Hi guys. Thanks for taking my questions. I have two short questions for Colette. The first: several billion dollars of Blackwell revenue in Q4 -- I guess, is that additive? You said you expected Hopper demand to strengthen in the second half. Does that mean Hopper strengthens Q3 to Q4 as well, on top of Blackwell adding several billion dollars? And the second question on gross margins: if I have mid-70s for the year, depending on where I want to draw that, if I have 75% for the year, I'd be something like 71% to 72% for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting? And how should we think about the drivers of gross margin evolution into next year as Blackwell ramps? And I mean, hopefully, I guess the yields and the inventory reserves and everything come up.
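(For context on the arithmetic in this question -- a back-of-the-envelope sketch with the quarterly revenue weights $R_q$ left symbolic, since they are not given here: the full-year gross margin is the revenue-weighted average of the quarterly margins $m_q$,

$$m_{\mathrm{FY}} = \frac{\sum_q R_q\, m_q}{\sum_q R_q} \quad\Longrightarrow\quad m_{Q4} = \frac{m_{\mathrm{FY}} \sum_q R_q - \sum_{q \neq 4} R_q\, m_q}{R_{Q4}},$$

so with Q2 at 75.7% non-GAAP, Q3 guided to about 75%, and Q4 expected to carry the largest revenue weight, holding the full year at roughly 75% pushes the implied Q4 margin into the low 70s -- which is where the 71% to 72% figure comes from.)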
5,833
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Colette Kress: Yes. So Stacy, let's first take your question that you had about Hopper and Blackwell. So we believe our Hopper will continue to grow into the second half. We have many new products for Hopper, and our existing products for Hopper, that we believe will continue to ramp in the next quarters, including our Q3, and those new products moving to Q4. So let's say Hopper in the second half versus the first half is a growth opportunity. Additionally, we have the Blackwell on top of that, and the Blackwell starting to ramp in Q4. So I hope that helps you on those two pieces. Your second piece is in terms of our gross margin. We provided gross margin for our Q3 on a non-GAAP basis at about 75%. We'll work with all the different transitions that we're going through, but we do believe we can do that 75% in Q3. We provided that we're still on track for the full year also in the mid-70s, or approximately 75%. So we're going to see some slight difference possibly in Q4, again, with our transitions and the different cost structures that we have on our new product introductions. However, I'm not at the same number that you are there. We don't have exact guidance, but I do believe you're lower than where we are. Operator: And your next question comes from the line of Ben Reitzes with Melius. Your line is open. Ben Reitzes: Yes, hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. There was the 10-Q that came out, and the United States was down sequentially while several Asian geographies were up a lot sequentially. Just wondering what the dynamics are there? Obviously, China did very well. You mentioned it in your remarks. What are the puts and takes? And then I just wanted to clarify from Stacy's question whether that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter, given all those favorable revenue dynamics? Thanks.
5,834
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Colette Kress: Let me talk a bit about our disclosure in the 10-Q, a required disclosure, and the choice of geographies. It is sometimes very challenging to create the right disclosure, as we have to come up with one key piece: who we sell to and/or specifically who we invoice. And so what you're seeing there is who we invoice. That's not necessarily where the product will eventually be, and where it may even travel to the end customer. These are just moving to our OEMs, our ODMs, and our system integrators, for the most part, across our product portfolio. So what you're seeing there is sometimes just a swift shift in terms of who they are using to complete their full configuration before those things are going into the data center, going into notebooks, and those pieces of it. And that shift happens from time to time. But yes, our China number there is our invoicing to China; keep in mind that is incorporating gaming, data center, and automotive in those numbers that we have. Going back to your statement regarding gross margin and also what we're seeing in terms of what we're looking at for Hopper and Blackwell in terms of revenue: Hopper will continue to grow in the second half. We'll continue to grow from what we are currently seeing. Determining that exact mix between Q3 and Q4, we don't have here; we are not here to guide yet in terms of Q4. But we do see right now the demand expectations, and we do see the visibility, that there will be a growth opportunity in Q4. On top of that, we will have our Blackwell architecture. Operator: And your next question comes from the line of C.J. Muse with Cantor Fitzgerald. Your line is open.
5,835
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: And your next question comes from the line of C.J. Muse with Cantor Fitzgerald. Your line is open. C.J. Muse: Yes, good afternoon. Thank you for taking the question. You've embarked on a remarkable annual product cadence, with challenges only likely to become greater given rising complexity and reticle limits in the advanced-packaging world. So curious, if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration, supply chain partnerships, and the consequential impact to your margin profile? Thank you.
5,836
NVDA
2
2025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Yes, thanks. Let's see. The answer to your first question is that the reason why our velocity is so high is because, simultaneously, the complexity of the model is growing and we want to continue to drive its cost down. It's growing, so we want to continue to increase its scale. And we believe that by continuing to scale, the AI models will reach a level of extraordinary usefulness and that it would open up and realize the next industrial revolution. We believe it. And so we're going to drive ourselves really hard to continue to go up that scale. We have the ability, fairly uniquely, to design an AI factory because we have all the parts. It's not possible to come up with a new AI factory every year unless you have all the parts. And so next year, we're going to ship a lot more CPUs than we've ever had in the history of our company, more GPUs, of course, but also NVLink switches, CX DPUs -- ConnectX DPUs for east-west, BlueField DPUs for north-south and data and storage processing -- to InfiniBand for supercomputing centers, to Ethernet, which is a brand-new product for us, which is well on its way to becoming a multi-billion dollar business, to bring AI to Ethernet. And so the fact that we could build -- we have access to all of this. We have one architectural stack, as you know. It allows us to introduce new capabilities to the market as we complete it. Otherwise, what happens is, you ship these parts, you go find customers to sell it to, and then somebody has got to build up an AI factory. And the AI factory has got a mountain of software. And so it's not about who integrates it. We love the fact that our supply chain is disintegrated, in the sense that we could service Quanta, Foxconn, HP, Dell, Lenovo, Supermicro. We used to be able to serve ZT; they were recently purchased, and so on and so forth. And so the number of ecosystem partners that we have, Gigabyte, ASUS, the number of ecosystem partners that we
5,837
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
And so the number of ecosystem partners that we have -- Gigabyte, ASUS, and so on -- allows them to take our architecture, which all works, and integrate it in a bespoke way into all of the world's cloud service providers and enterprise data centers. The scale and reach necessary from our ODM and integrator supply chain is vast and gigantic, because the world is huge. And so that part we don't want to do, and we're not good at doing it. But we know how to design the AI infrastructure, provide it the way that customers would like it, and let the ecosystem integrate it. Well, yes. So anyway, that's the reason why.
5,838
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: And your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open. Aaron Rakers: Yes, thanks for taking the questions. I wanted to go back to the Blackwell product cycle. One of the questions that we tend to get asked is how you see the rack-scale system mix dynamic as you think about leveraging NVLink, as you think about GB200 NVL72, and how that go-to-market dynamic looks as far as the Blackwell product cycle. I guess, to put it distinctly, how do you see the mix of rack-scale systems as we start to think about the Blackwell cycle playing out?
5,839
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Yes, Aaron, thanks. The Blackwell rack system is designed and architected as a rack, but it's sold as disaggregated system components. We don't sell the whole rack. And the reason for that is because everybody's rack is a little different, surprisingly. You know, some of them are OCP standards, some of them are not, some of them are enterprise, and the power limits for everybody could be a little different. The choice of CDUs, the choice of power bus bars, the configuration and integration into people's data centers -- all different. And so the way we designed it, we architected the whole rack. The software is going to work perfectly across the whole rack. And then we provide the system components -- for example, the CPU and GPU compute board is integrated into MGX, a modular system architecture. MGX is completely ingenious. And we have MGX ODMs and integrators and OEMs all over the planet. And so just about any configuration you would like, wherever you would like that 3,000-pound rack to be delivered -- it has to be integrated and assembled close to the data center because it's fairly heavy. And so everything in the supply chain from the moment that we ship the GPUs, the CPUs, the switches, the NICs -- from that point forward, the integration is done quite close to the locations of the CSPs and the locations of the data centers. And so you can imagine how many data centers in the world there are and how many logistics hubs we've scaled out to with our ODM partners. And so I think that because we show it as one rack, and because it's always rendered that way and shown that way, we might have left the impression that we're doing the integration. Our customers hate that we do integration. The supply chain hates us doing integration. They want to do the integration; that's their value added. There's a final design-in, if you will. It's not quite as simple as shimmying into a data center; the design fit-in is really complicated.
5,840
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
And so the design fit-in, the installation, the bring-up, the repair-and-replace -- that entire cycle is done all over the world. And we have a sprawling network of ODM and OEM partners that does this incredibly well. So integration is not the reason why we're doing racks; it's the anti-reason of doing it. We don't want to be an integrator. We want to be a technology provider.
5,841
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: And I will now turn the call back over to Jensen Huang for closing remarks.
5,842
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Thank you. Let me repeat a couple of comments that I made earlier. Data centers worldwide are at full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong, and the anticipation for Blackwell is incredible. Let me highlight the top five things about our company. First, accelerated computing has reached a tipping point. CPU scaling slows, so developers must accelerate everything possible. Accelerated computing starts with CUDA-X libraries. New libraries open new markets for NVIDIA. We released many new libraries, including accelerated Polars, pandas, and Spark, the leading data science and data processing libraries; cuVS for vector databases, which is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers that we can go into now; and Parabricks for gene sequencing. And AlphaFold2 for protein structure prediction is now CUDA accelerated. We are at the beginning of our journey to modernize $1 trillion worth of data centers from general-purpose computing to accelerated computing. That's number one. Number two, Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU. It also happens to be the name of our GPU, but it's an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's lead becomes clear. The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU; the Blackwell dual GPU in a CoWoS package; the ConnectX DPU for East-West traffic; the BlueField DPU for North-South and storage traffic; the NVLink switch for all-to-all GPU communications; and Quantum and Spectrum-X, so that both InfiniBand and Ethernet can support the massive burst traffic of AI.
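The CUDA-X data-science libraries mentioned above are designed as drop-in accelerators for existing code. As a hedged illustration only (a minimal sketch, assuming the RAPIDS cuDF package and a CUDA-capable GPU are installed; the file name and column names are hypothetical, not from the call), the accelerated-pandas workflow looks roughly like this:

```python
# Minimal sketch of the accelerated pandas workflow referenced above.
# Assumes RAPIDS cuDF is installed and a CUDA-capable GPU is present;
# the input file and column names are hypothetical.
import cudf.pandas
cudf.pandas.install()  # route subsequent pandas calls through cuDF on the GPU

import pandas as pd  # now GPU-accelerated where possible, with CPU fallback

df = pd.read_parquet("transactions.parquet")  # hypothetical input file
summary = (
    df.groupby("customer_id")["amount"]
      .agg(["count", "sum", "mean"])
      .sort_values("sum", ascending=False)
)
print(summary.head())
```

The design point is that no application code changes: the same pandas script runs accelerated where the GPU path exists and falls back to the CPU otherwise.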
5,843
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Blackwell AI factories are building-sized computers. NVIDIA designed and optimized the Blackwell platform full stack, end to end -- from chips, systems, networking, even structured cables, power and cooling, and mountains of software -- to make it fast for customers to build AI factories. These are very capital-intensive infrastructures. Customers want to deploy them as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides 3 to 5 times more AI throughput in a power-limited data center than Hopper. Third is NVLink. This is a very big deal; its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. Just to put that in perspective, that's about 10 times higher than Hopper. And 259 terabytes per second kind of makes sense, because you need to boost the training of multi-trillion-parameter models on trillions of tokens, and that amount of data needs to be moved around from GPU to GPU. For inference, NVLink is vital for low-latency, high-throughput large language model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before. Fourth, generative AI momentum is accelerating. Generative AI frontier model makers are racing to scale to the next AI plateau to increase model safety and IQ. We're also scaling to understand more modalities, from text, images, and video to 3D, physics, chemistry, and biology. Chatbots, coding AIs, and image generators are growing fast, but it's just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems.
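To make the NVLink figures above concrete, here is a back-of-the-envelope check (a sketch only; the Hopper-generation bandwidth is inferred from the stated "about 10 times higher" ratio rather than given independently):

```python
# Back-of-the-envelope check of the NVLink numbers quoted above.
gb200_packages_per_rack = 72      # GB200 packages in one NVLink domain
gpus_per_package = 2              # each GB200 package carries two Blackwell GPUs
gpus_per_domain = gb200_packages_per_rack * gpus_per_package
assert gpus_per_domain == 144     # matches the 144-GPU figure from the call

blackwell_nvlink_tb_s = 259       # aggregate NVLink bandwidth per rack (TB/s)
hopper_ratio = 10                 # "about 10 times higher than Hopper"
implied_hopper_tb_s = blackwell_nvlink_tb_s / hopper_ratio
print(f"{gpus_per_domain} GPUs, {blackwell_nvlink_tb_s} TB/s per rack")
print(f"implied Hopper-generation domain bandwidth ~ {implied_hopper_tb_s:.0f} TB/s")
```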
5,844
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
AI start-ups are consuming tens of billions of dollars yearly of CSPs' cloud capacity, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. NVIDIA AI and NVIDIA Omniverse are opening up the next era of AI: general robotics. And fifth, the enterprise AI wave has started, and we're poised to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIM, NIM agent blueprints, and AI Foundry, which our ecosystem partners, the world's leading IT companies, use to help companies customize AI models and build bespoke AI applications. Enterprises can then deploy on the NVIDIA AI Enterprise runtime. At $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere, and NVIDIA's software TAM can be significant as the CUDA-compatible GPU installed base grows from millions to tens of millions. And as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate. Thank you all for joining us today.
5,845
NVDA
2
2,025
2024-08-28 17:00:00
NVIDIA Corporation
32,307
Operator: Ladies and gentlemen, this concludes today's call and we thank you for your participation. You may now disconnect.
5,846
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: Good afternoon. My name is Regina and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's First Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speaker's remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. Simona Jankowski, you may begin your conference.
5,847
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Simona Jankowski: Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2025. With me today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2025. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 22, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. Let me highlight some upcoming events. On Sunday, June 2nd, ahead of the Computex Technology Trade Show in Taiwan, Jensen will deliver a keynote which will be held in-person in Taipei as well as streamed live. And on June 5th, we will present at the Bank of America Technology Conference in San Francisco. With that let me turn the call over to Colette.
5,848
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Colette Kress: Thanks, Simona. Q1 was another record quarter. Revenue of $26 billion was up 18% sequentially and up 262% year-on-year, and well above our outlook of $24 billion. Starting with Data Center. Data Center revenue of $22.6 billion was a record, up 23% sequentially and up 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform. Compute revenue grew more than 5x and networking revenue more than 3x from last year. Strong sequential data center growth was driven by all customer types, led by enterprise and consumer internet companies. Large cloud providers continue to drive strong growth as they deploy and ramp NVIDIA AI infrastructure at scale, and represented the mid-40s as a percentage of our Data Center revenue. Training and inferencing AI on NVIDIA CUDA is driving meaningful acceleration in cloud rental revenue growth, delivering an immediate and strong return on cloud providers' investment. For every $1 spent on NVIDIA AI infrastructure, cloud providers have an opportunity to earn $5 in GPU instance hosting revenue over four years. NVIDIA's rich software stack and ecosystem and tight integration with cloud providers make it easy for end customers to get up and running on NVIDIA GPU instances in the public cloud. For cloud rental customers, NVIDIA GPUs offer the best time to train models, the lowest cost to train models, and the lowest cost to inference large language models. For public cloud providers, NVIDIA brings customers to their cloud, driving revenue growth and returns on their infrastructure investments. Leading LLM companies such as OpenAI, Adept, Anthropic, Character.AI, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud. Enterprises drove strong sequential growth in Data Center this quarter. We supported Tesla's expansion of their training AI cluster to 35,000 H100 GPUs.
5,849
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD version 12, their latest vision-based autonomous driving software. Video transformers, while consuming significantly more computing, are enabling dramatically better autonomous driving capabilities and propelling significant growth for NVIDIA AI infrastructure across the automotive industry. We expect automotive to be our largest enterprise vertical within Data Center this year, driving a multibillion-dollar revenue opportunity across on-prem and cloud consumption. Consumer internet companies are also a strong growth vertical. A big highlight this quarter was Meta's announcement of Llama 3, their latest large language model, which was trained on a cluster of 24,000 H100 GPUs. Llama 3 powers Meta AI, a new AI assistant available on Facebook, Instagram, WhatsApp, and Messenger. Llama 3 is openly available and has kickstarted a wave of AI development across industries. As generative AI makes its way into more consumer internet applications, we expect to see continued growth opportunities as inference scales, both with model complexity and with the number of users and number of queries per user, driving much more demand for AI compute. In our trailing four quarters, we estimate that inference drove about 40% of our Data Center revenue. Both training and inference are growing significantly. Large clusters like the ones built by Meta and Tesla are examples of the essential infrastructure for AI production, what we refer to as AI factories. These next-generation data centers host advanced full-stack accelerated computing platforms where the data comes in and intelligence comes out. In Q1, we worked with over 100 customers building AI factories ranging in size from hundreds to tens of thousands of GPUs, with some reaching 100,000 GPUs. From a geographic perspective, Data Center revenue continues to diversify as countries around the world invest in sovereign AI.
5,850
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Sovereign AI refers to a nation's capability to produce artificial intelligence using its own infrastructure, data, workforce, and business networks. Nations are building up domestic computing capacity through various models. Some are procuring and operating sovereign AI clouds in collaboration with state-owned telecommunication providers or utilities. Others are sponsoring local cloud partners to provide a shared AI computing platform for public- and private-sector use. For example, Japan plans to invest more than $740 million in key digital infrastructure providers, including KDDI, Sakura Internet, and SoftBank, to build out the nation's sovereign AI infrastructure. France-based Scaleway, a subsidiary of the Iliad Group, is building Europe's most powerful cloud-native AI supercomputer. In Italy, Swisscom Group will build the nation's first and most powerful NVIDIA DGX-powered supercomputer to develop the first LLM natively trained in the Italian language. And in Singapore, the National Supercomputer Centre is getting upgraded with NVIDIA Hopper GPUs, while Singtel is building NVIDIA-accelerated AI factories across Southeast Asia. NVIDIA's ability to offer end-to-end compute-to-networking technologies, full-stack software, AI expertise, and a rich ecosystem of partners and customers allows sovereign AI and regional cloud providers to jumpstart their countries' AI ambitions. From nothing the previous year, we believe sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation. We ramped new products designed specifically for China that don't require an export control license. Our Data Center revenue in China is down significantly from the level prior to the imposition of the new export control restrictions in October. We expect the market in China to remain very competitive going forward.
5,851
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
From a product perspective, the vast majority of compute revenue was driven by our Hopper GPU architecture. Demand for Hopper during the quarter continued to increase. Thanks to CUDA algorithm innovations, we've been able to accelerate LLM inference on H100 by up to 3x, which can translate to a 3x cost reduction for serving popular models like Llama 3. We started sampling the H200 in Q1 and are currently in production, with shipments on track for Q2. The first H200 system was delivered by Jensen to Sam Altman and the team at OpenAI and powered their amazing GPT-4o demos last week. H200 nearly doubles the inference performance of H100, delivering significant value for production deployments. For example, using Llama 3 with 70 billion parameters, a single NVIDIA HGX H200 server can deliver 24,000 tokens per second, supporting more than 2,400 users at the same time. That means that for every $1 spent on NVIDIA HGX H200 servers at current prices per token, an API provider serving Llama 3 tokens can generate $7 in revenue over four years. With ongoing software optimizations, we continue to improve the performance of NVIDIA AI infrastructure for serving AI models. While supply for H100 improved, we are still constrained on H200. At the same time, Blackwell is in full production. We are working to bring up our system and cloud partners for global availability later this year. Demand for H200 and Blackwell is well ahead of supply, and we expect demand may exceed supply well into next year. Grace Hopper Superchip is shipping in volume. Last week at the International Supercomputing Conference, we announced that nine new supercomputers worldwide are using Grace Hopper, for a combined 200 exaflops of energy-efficient AI processing power delivered this year. These include the Alps supercomputer at the Swiss National Supercomputing Centre, the fastest AI supercomputer in Europe; Isambard-AI at the University of Bristol in the UK; and JUPITER at the Jülich Supercomputing Centre in Germany.
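The H200 serving economics quoted above can be reproduced with simple arithmetic. A hedged sketch (the per-user rate is derived from the quoted figures; the $7-per-$1 revenue claim is taken as given from the call rather than recomputed from any specific API price):

```python
# Reproduce the H200 serving arithmetic quoted above.
tokens_per_second = 24_000       # single HGX H200 server, Llama 3 70B
concurrent_users = 2_400         # "more than 2,400 users at the same time"

per_user_rate = tokens_per_second / concurrent_users
print(f"~{per_user_rate:.0f} tokens/sec sustained per concurrent user")  # ~10

seconds_per_year = 365 * 24 * 3600
four_year_tokens = tokens_per_second * seconds_per_year * 4
print(f"~{four_year_tokens:.2e} tokens served over four years at full load")
# The call's framing: at then-current prices per token, that volume yields
# roughly $7 of API revenue per $1 of server spend (taken as given here).
```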
5,852
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
We are seeing an 80% attach rate of Grace Hopper in supercomputing due to its high energy efficiency and performance. We are also proud to see supercomputers powered by Grace Hopper take the number one, number two, and number three spots among the most energy-efficient supercomputers in the world. Strong networking year-on-year growth was driven by InfiniBand. We experienced a modest sequential decline, which was largely due to the timing of supply, with demand well ahead of what we were able to ship. We expect networking to return to sequential growth in Q2. In the first quarter, we started shipping our new Spectrum-X Ethernet networking solution, optimized for AI from the ground up. It includes our Spectrum-4 switch, BlueField-3 DPU, and new software technologies that overcome the challenges of AI on Ethernet to deliver 1.6x higher networking performance for AI processing compared with traditional Ethernet. Spectrum-X is ramping in volume with multiple customers, including a massive 100,000-GPU cluster. Spectrum-X opens a brand-new market to NVIDIA networking and enables Ethernet-only data centers to accommodate large-scale AI. We expect Spectrum-X to jump to a multibillion-dollar product line within a year. At GTC in March, we launched our next-generation AI factory platform, Blackwell. The Blackwell GPU architecture delivers up to 4x faster training and 30x faster inference than the H100 and enables real-time generative AI on trillion-parameter large language models. Blackwell is a giant leap, with up to 25x lower TCO and energy consumption than Hopper. The Blackwell platform includes the fifth-generation NVLink with a multi-GPU spine and new InfiniBand and Ethernet switches, the X800 series, designed for trillion-parameter-scale AI.
5,853
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Blackwell is designed to support data centers universally, from hyperscale to enterprise, training to inference, x86 to Grace CPUs, Ethernet to InfiniBand networking, and air cooling to liquid cooling. Blackwell will be available in over 100 OEM and ODM systems at launch, more than double the number at Hopper's launch and representing every major computer maker in the world. This will support fast and broad adoption across customer types, workloads, and data center environments in first-year shipments. Blackwell time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI. We announced a new software product with the introduction of NVIDIA Inference Microservices, or NIM. NIM provides secure and performance-optimized containers powered by NVIDIA CUDA acceleration and networking, computing, and inference software, including Triton Inference Server and TensorRT-LLM, with industry-standard APIs for a broad range of use cases, including large language models for text, speech, imaging, vision, robotics, genomics, and digital biology. They enable developers to quickly build and deploy generative AI applications using leading models from NVIDIA, AI21, Adept, Cohere, Getty Images, and Shutterstock, and open models from Google, Hugging Face, Meta, Microsoft, Mistral AI, Snowflake, and Stability AI. NIM will be offered as part of our NVIDIA AI Enterprise software platform for production deployment in the cloud or on-prem. Moving to gaming and AI PCs. Gaming revenue of $2.65 billion was down 8% sequentially and up 18% year-on-year, consistent with our outlook for a seasonal decline. Market reception of the GeForce RTX Super GPUs is strong, and end demand and channel inventory remained healthy across the product range. From the very start of our AI journey, we equipped GeForce RTX GPUs with CUDA Tensor Cores. Now, with an installed base of over 100 million, GeForce RTX GPUs are perfect for gamers, creators, and AI enthusiasts, and offer unmatched performance for running generative AI applications on PCs.
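The NIM containers described above expose industry-standard APIs. As a hedged sketch only (the host, port, and model name here are assumptions for illustration, not values from the call; NIM containers generally expose an OpenAI-compatible chat-completions endpoint), calling a locally deployed microservice might look like this:

```python
# Hedged sketch: querying a locally deployed NIM container through its
# OpenAI-compatible endpoint. Host, port, and model name are assumptions.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local NIM endpoint
    json={
        "model": "meta/llama3-8b-instruct",       # hypothetical deployed model
        "messages": [{"role": "user", "content": "Summarize NVLink in one line."}],
        "max_tokens": 64,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Because the API shape follows the industry standard, existing OpenAI-style client code can point at the container without modification.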
5,854
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
NVIDIA has the full technology stack for deploying and running fast and efficient generative AI inference on GeForce RTX PCs. TensorRT-LLM now accelerates Microsoft's Phi-3 Mini model and Google's Gemma 2B and 7B models, as well as popular AI frameworks including LangChain and LlamaIndex. Yesterday, NVIDIA and Microsoft announced AI performance optimizations for Windows to help run LLMs up to 3x faster on NVIDIA GeForce RTX AI PCs. And top game developers, including NetEase Games, Tencent, and Ubisoft, are embracing the NVIDIA Avatar Character Engine to create lifelike avatars and transform interactions between gamers and non-playable characters. Moving to ProViz. Revenue of $427 million was down 8% sequentially and up 45% year-on-year. We believe generative AI and Omniverse industrial digitalization will drive the next wave of professional visualization growth. At GTC, we announced new Omniverse Cloud APIs to enable developers to integrate Omniverse industrial digital twin and simulation technologies into their applications. Some of the world's largest industrial software makers are adopting these APIs, including Ansys, Cadence, Dassault Systèmes' 3DEXCITE brand, and Siemens. And developers can use them to stream industrial digital twins to spatial computing devices such as Apple Vision Pro. Omniverse Cloud APIs will be available on Microsoft Azure later this year. Companies are using Omniverse to digitalize their workflows. Omniverse-powered digital twins enabled Wistron, one of our manufacturing partners, to reduce end-to-end production cycle times by 50% and defect rates by 40%. And BYD, the world's largest electric vehicle maker, is adopting Omniverse for virtual factory planning and retail configurations. Moving to automotive. Revenue was $329 million, up 17% sequentially and up 11% year-on-year.
5,855
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Sequential growth was driven by the ramp of AI cockpit solutions with global OEM customers and strength in our self-driving platforms. Year-on-year growth was driven primarily by self-driving. We supported Xiaomi in the successful launch of its first electric vehicle, the SU7 sedan, built on NVIDIA DRIVE Orin, our AI car computer for software-defined AV fleets. We also announced a number of new design wins on NVIDIA DRIVE Thor, the successor to Orin, powered by the new NVIDIA Blackwell architecture, with several leading EV makers, including BYD, XPeng, GAC's AION Hyper, and Nuro. DRIVE Thor is slated for production vehicles starting next year. Okay, moving to the rest of the P&L. GAAP gross margin expanded sequentially to 78.4% and non-GAAP gross margin to 78.9%, on lower inventory charges. As noted last quarter, both Q4 and Q1 benefited from favorable component costs. Sequentially, GAAP operating expenses were up 10% and non-GAAP operating expenses were up 13%, primarily reflecting higher compensation-related costs and increased compute and infrastructure investments. In Q1, we returned $7.8 billion to shareholders in the form of share repurchases and cash dividends. Today, we announced a 10-for-1 split of our shares, with June 10th as the first day of trading on a split-adjusted basis. We are also increasing our dividend by 150%. Let me turn to the outlook for the second quarter. Total revenue is expected to be $28 billion, plus or minus 2%. We expect sequential growth in all market platforms. GAAP and non-GAAP gross margins are expected to be 74.8% and 75.5%, respectively, plus or minus 50 basis points, consistent with our discussion last quarter. For the full year, we expect gross margins to be in the mid-70s percent range. GAAP and non-GAAP operating expenses are expected to be approximately $4 billion and $2.8 billion, respectively. Full-year OpEx is expected to grow in the low-40% range.
5,856
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
GAAP and non-GAAP other income and expenses are expected to be an income of approximately $300 million, excluding gains and losses from nonaffiliated investments. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. I would now like to turn it over to Jensen, as he would like to make a few comments.
5,857
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Thanks, Colette. The industry is going through a major change. Before we start Q&A, let me give you some perspective on the importance of the transformation. The next industrial revolution has begun. Companies and countries are partnering with NVIDIA to shift the trillion-dollar installed base of traditional data centers to accelerated computing and build a new type of data center, AI factories, to produce a new commodity: artificial intelligence. AI will bring significant productivity gains to nearly every industry and help companies be more cost- and energy-efficient while expanding revenue opportunities. CSPs were the first generative AI movers. With NVIDIA, CSPs accelerated workloads to save money and power. The tokens generated by NVIDIA Hopper drive revenues for their AI services, and NVIDIA cloud instances attract rental customers from our rich ecosystem of developers. Strong and accelerating demand for generative AI training and inference on the Hopper platform propels our Data Center growth. Training continues to scale as models learn to be multimodal, understanding text, speech, images, video, and 3D, and learn to reason and plan. Our inference workloads are growing incredibly. With generative AI, inference, which is now about fast token generation at massive scale, has become incredibly complex. Generative AI is driving a from-the-foundation-up, full-stack computing platform shift that will transform every computer interaction. From today's information retrieval model of computing, we are shifting to an answers-and-skills generation model. AI will understand context and our intentions, be knowledgeable, reason, plan, and perform tasks. We are fundamentally changing how computing works and what computers can do: from general-purpose CPU to GPU accelerated computing, from instruction-driven software to intention-understanding models, from retrieving information to performing skills, and, at the industrial level, from producing software to generating tokens, manufacturing digital intelligence.
5,858
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Token generation will drive a multiyear build-out of AI factories. Beyond cloud service providers, generative AI has expanded to consumer internet companies and to enterprise, sovereign AI, automotive, and healthcare customers, creating multiple multibillion-dollar vertical markets. The Blackwell platform is in full production and forms the foundation for trillion-parameter-scale generative AI. The combination of Grace CPUs, Blackwell GPUs, NVLink, Quantum and Spectrum NICs and switches, high-speed interconnects, and a rich ecosystem of software and partners lets us expand and offer a richer and more complete solution for AI factories than previous generations. Spectrum-X opens a brand-new market for us to bring large-scale AI to Ethernet-only data centers. And NVIDIA NIM is our new software offering that delivers enterprise-grade, optimized generative AI to run on CUDA everywhere, from the cloud to on-prem data centers to RTX AI PCs, through our expansive network of ecosystem partners. From Blackwell to Spectrum-X to NIM, we are poised for the next wave of growth. Thank you.
5,859
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Simona Jankowski: Thank you, Jensen. We will now open the call for questions. Operator, could you please poll for questions? Operator: [Operator Instructions] Your first question comes from the line of Stacy Rasgon with Bernstein. Please go ahead. Stacy Rasgon: Hi, guys. Thanks for taking my questions. My first one, I wanted to drill a little bit into the Blackwell comment that it's in full production now. What does that suggest with regard to shipments and delivery timing? That product doesn't sound like it's sampling anymore. If it's in production now, when is it actually in customers' hands? Jensen Huang: We will be shipping. Well, we've been in production for a little bit of time. But our production shipments will start in Q2 and ramp in Q3, and customers should have data centers stood up in Q4. Stacy Rasgon: Got it. So this year, we will see Blackwell revenue, it sounds like? Jensen Huang: We will see a lot of Blackwell revenue this year. Operator: Our next question will come from the line of Timothy Arcuri with UBS. Please go ahead. Timothy Arcuri: Thanks a lot. I wanted to ask, Jensen, about the deployment of Blackwell versus Hopper, just given the systems nature and all the demand for GB200 that you have. How does the deployment of this differ from Hopper? I ask because liquid cooling at scale hasn't been done before, and there are some engineering challenges, both at the node level and within the data center. So do these complexities elongate the transition? And how do you think about how that's all going? Thanks.
5,860
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Yes. Blackwell comes in many configurations. Blackwell is a platform, not a GPU. And the platform includes support for air-cooled, liquid-cooled, x86 and Grace, InfiniBand, now Spectrum-X, and the very large NVLink domain that I demonstrated at GTC. And so for some customers, they will ramp into their existing installed base of data centers that are already shipping Hoppers. They will easily transition from H100 to H200 to B100, because Blackwell systems have been designed to be backwards compatible, if you will, electrically and mechanically. And of course, the software stack that runs on Hopper will run fantastically on Blackwell. We also have been priming the pump, if you will, with the entire ecosystem, getting them ready for liquid cooling. We've been talking to the ecosystem about Blackwell for quite some time. And the CSPs, the data centers, the ODMs, the system makers, our supply chain beyond them, the cooling supply chain base, the liquid cooling supply chain base, the data center supply chain base -- no one is going to be surprised by Blackwell coming and the capabilities that we would like to deliver with Grace Blackwell 200. GB200 is going to be exceptional. Operator: Our next question will come from the line of Vivek Arya with Bank of America Securities. Please go ahead. Vivek Arya: Thanks for taking my question. Jensen, how are you ensuring that there is enough utilization of your products and that there isn't a pull-ahead or holding behavior because of tight supply, competition, or other factors? Basically, what checks have you built into the system to give us confidence that monetization is keeping pace with your really very strong shipment growth?
5,861
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Well, I guess there's the big-picture view that I'll come to, but I'll answer your question directly first. The demand for GPUs in all the data centers is incredible. We're racing every single day. And the reason for that is because applications like ChatGPT and GPT-4o, which is now going to be multi-modality, and Gemini and its ramp, and Anthropic, and all of the work being done at all the CSPs are consuming every GPU that's out there. There's also a long line of generative AI startups, some 15,000 to 20,000 startups in all different fields, from multimedia to digital characters, all kinds of design-tool and productivity applications, digital biology, the moving of the AV industry to video so that they can train end-to-end models and expand the operating domain of self-driving cars. The list is just quite extraordinary. We're racing, actually. Customers are putting a lot of pressure on us to deliver the systems and stand them up as quickly as possible. And of course, I haven't even mentioned all of the sovereign AIs who would like to train on the regional natural resource of their country, which is their data, to train their regional models. There's a lot of pressure to stand those systems up. So anyhow, the demand, I think, is really, really high and it outstrips our supply. Longer term, that's the reason why I jumped in to make a few comments. Longer term, we're completely redesigning how computers work. And this is a platform shift. Of course, it's been compared to other platform shifts in the past, but time will clearly tell that this is much, much more profound than previous platform shifts. And the reason is that the computer is no longer an instruction-driven-only computer. It's an intention-understanding computer. And it understands, of course, the way we interact with it, but it also understands our meaning, what we intend when we ask it to do something, and it has the ability to reason and inference iteratively to process a plan and come back with a solution.
5,862
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
And so every aspect of the computer is changing in such a way that, instead of retrieving prerecorded files, it is now generating contextually relevant, intelligent answers. And so that's going to change computing stacks all over the world. And you saw at Build that, in fact, even the PC computing stack is going to get revolutionized. And this is just the beginning. What people see today is the beginning of the things that we're working on in our labs and the things that we're doing with all the startups and large companies and developers all over the world. It's going to be quite extraordinary.
5,863
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: Our next question will come from the line of Joe Moore with Morgan Stanley. Please go ahead. Joseph Moore: Great. Thank you. I understand what you just said about how strong demand is. You have a lot of demand for H200 and for Blackwell products. Do you anticipate any kind of pause with Hopper and H100 as you migrate to those products? Will people wait for those new products, which would be good products to have? Or do you think there's enough demand for H100 to sustain growth? Jensen Huang: We see increasing demand for Hopper through this quarter. And we expect demand to outstrip supply for some time as we now transition to H200 and as we transition to Blackwell. Everybody is anxious to get their infrastructure online. And the reason for that is because they're saving money and making money, and they would like to do that as soon as possible. Operator: Our next question will come from the line of Toshiya Hari with Goldman Sachs. Please go ahead. Toshiya Hari: Hi. Thank you so much for taking the question. Jensen, I wanted to ask about competition. I think many of your cloud customers have announced new or updated internal programs, right, in parallel to what they're working on with you. To what extent do you consider them competitors, medium to long term? And in your view, do you think they're limited to addressing mostly internal workloads, or could they be broader in what they address going forward? Thank you.
5,864
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: We're different in several ways. First, NVIDIA's accelerated computing architecture allows customers to process every aspect of their pipeline, from unstructured data processing to prepare it for training, to structured data processing, data-frame processing like SQL, to prepare for training, to training, to inference. And as I was mentioning in my remarks, inference has really fundamentally changed; it's now generation. It's not trying to just detect the cat, which was plenty hard in itself; it has to generate every pixel of a cat. And so the generation process is a fundamentally different processing architecture. And it's one of the reasons why TensorRT-LLM was so well received. We improved the performance, using the same chips on our architecture, by a factor of three. That kind of tells you something about the richness of our architecture and the richness of our software. So one, you can use NVIDIA for everything, from computer vision to image processing to computer graphics to all modalities of computing. And as the world is now suffering from computing cost and computing energy inflation, because general-purpose computing has run its course, accelerated computing is really the sustainable way of going forward. Accelerated computing is how you're going to save money in computing and how you're going to save energy in computing. And so the versatility of our platform results in the lowest TCO for their data centers. Second, we're in every cloud. And so for developers that are looking for a platform to develop on, starting with NVIDIA is always a great choice. And we're on-prem, we're in the cloud, we're in computers of any size and shape. We're practically everywhere. And so that's the second reason. The third reason has to do with the fact that we build AI factories. And this is becoming more apparent to people: AI is not only a chip problem. It starts, of course, with very good chips, and we build a whole bunch of chips for our AI factories, but it's a systems problem.
5,865
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
In fact, even AI is now a systems problem. It's not just one large language model; it's a complex system of a whole bunch of large language models that are working together. And so the fact that NVIDIA builds this system causes us to optimize all of our chips to work together as a system, to be able to have software that operates as a system, and to be able to optimize across the system. And just to put it in perspective, in simple numbers: if you had a $5 billion infrastructure and you improved the performance by a factor of two, which we routinely do, the value to you is $5 billion. All the chips in that data center wouldn't pay for it. And so the value of it is really quite extraordinary. And this is the reason why, today, performance matters above everything. This is at a time when the highest performance is also the lowest cost, because the infrastructure cost of carrying all of these chips is so high. It takes a lot of money to fund the data center and to operate the data center, and the people that go along with it, the power that goes along with it, the real estate that goes along with it -- all of it adds up. And so the highest performance is also the lowest TCO.
5,866
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: Our next question will come from the line of Matt Ramsay with TD Cowen. Please go ahead. Matthew Ramsay: Thank you very much. Good afternoon, everyone. Jensen, I've been in the data center industry my whole career. I've never seen the velocity at which you guys are introducing new platforms, combined with the performance jumps that you're getting -- I mean, 5x in training, and some of the stuff you talked about at GTC, up to 30x in inference. And it's an amazing thing to watch, but it also creates an interesting juxtaposition, where the current generation of product that your customers are spending billions of dollars on is going to become less competitive with your new products much more quickly than the depreciation cycle of that product. So I'd like you to, if you wouldn't mind, speak a little bit about how you're seeing that situation evolve with customers. As you move to Blackwell, you're going to have very large installed bases, obviously software-compatible, but large installed bases of product that's not nearly as performant as your new-generation stuff. And it would be interesting to hear what you see happening with customers along that path. Thank you.
5,867
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Yes. I really appreciate it. Three points that I'd like to make. If you're 5% into the build-out versus 95% into the build-out, you're going to feel very differently. And because you're only 5% into the build-out anyhow, you build as fast as you can. And when Blackwell comes, it's going to be terrific. And then after Blackwell, as you mentioned, we have other Blackwells coming. We're in a one-year rhythm, as we've explained to the world. And we want our customers to see our roadmap for as far out as they like, but they're early in their build-out anyway, and so they have to just keep on building, okay? And so there's going to be a whole bunch of chips coming at them, and they've just got to keep on building and, if you will, performance-average their way into it. So that's the smart thing to do. They need to make money today. They want to save money today. And time is really, really valuable to them. Let me give you an example of time being really valuable, of why this idea of standing up a data center instantaneously is so valuable and why this thing called time-to-train is so valuable. The reason is that the next company to reach the next major plateau gets to announce a groundbreaking AI, and the second one after that gets to announce something that's 0.3% better. And so the question is, do you want to be repeatedly the company delivering groundbreaking AI, or the company delivering 0.3% better? And that's the reason why this race, as in all technology races, matters so much. And you're seeing this race across multiple companies, because it is so vital to have technology leadership, for companies to trust that leadership and want to build on your platform, knowing that the platform they're building on is going to get better and better. And so leadership matters a great deal. Time-to-train matters a great deal.
5,868
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
The difference of three months in time-to-train -- on a three-month project, getting started three months earlier -- is everything. And so it's the reason why we're standing up Hopper systems like mad right now, because the next plateau is just around the corner. So that's the second reason. The first comment that you made is really a great comment: how is it that we're moving so fast and advancing so quickly? Because we have all the stacks here. We literally build the entire data center, and we can monitor everything, measure everything, optimize across everything. We know where all the bottlenecks are. We're not guessing about it. We're not putting up PowerPoint slides that look good -- we also like our PowerPoint slides to look good -- but we're delivering systems that perform at scale. And the reason why we know they perform at scale is because we built it all here. Now, one of the things that we do that's a bit of a miracle is that we build the entire AI infrastructure here, but then we disaggregate it and integrate it into our customers' data centers however they like. But we know how it's going to perform, and we know where the bottlenecks are. We know where we need to optimize with them, and we know where we have to help them improve their infrastructure to achieve the most performance. This deep, intimate knowledge at the entire data center scale is fundamentally what sets us apart today. We build every single chip from the ground up. We know exactly how processing is done across the entire system. And so we understand exactly how it's going to perform and how to get the most out of it with every single generation. So I appreciate it. Those are the three points.
5,869
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: Your next question will come from the line of Mark Lipacis with Evercore ISI. Please go ahead. Mark Lipacis: Hi. Thanks for taking my question. Jensen, in the past, you've made the observation that general-purpose computing ecosystems typically dominated each computing era, and I believe the argument was that they could adapt to different workloads, get higher utilization, and drive the cost per compute cycle down. And this was the motivation for why you were driving toward a general-purpose GPU CUDA ecosystem for accelerated computing. And if I mischaracterized that observation, please do let me know. So the question is, given that the workloads driving demand for your solutions are driven by neural network training and inferencing, which on the surface seem like a limited number of workloads, they might also seem to lend themselves to custom solutions. And so the question is, does the general-purpose computing framework become more at risk, or is there enough variability, or rapid enough evolution, in these workloads to support that historical general-purpose framework? Thank you.
5,870
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Yes. NVIDIA's accelerated computing is versatile, but I wouldn't call it general-purpose. For example, we wouldn't be very good at running a spreadsheet; that was really designed for general-purpose computing. The control loop of operating system code probably isn't fantastic for accelerated computing either. And so I would say that we're versatile, and that's usually the way I describe it. There's a rich domain of applications that we've been able to accelerate over the years, and they all have a lot of commonalities -- maybe some deep differences, but commonalities. They're all things that can run in parallel; they're all heavily threaded; 5% of the code represents 99% of the run-time, for example. Those are all properties of accelerated computing. The versatility of our platform, and the fact that we design entire systems, is the reason why, over the course of the last 10 years or so, of the fairly large number of start-ups that you guys have asked me about in these conference calls, every single one of them, because of the brittleness of their architecture, struggled the moment generative AI came along, or the moment diffusion models came along, or the moment the next models came along. And now, all of a sudden, look at this: large language models with memory, because the large language model needs memory so it can carry on a conversation with you and understand the context. All of a sudden, the versatility of the Grace memory became super important. And so each one of these advances in generative AI, and the advancement of AI overall, really begs for not having a widget that's designed for one model, but having something that is really good for this entire domain, that has the properties of this entire domain but obeys the first principles of software: that software is going to continue to evolve, that software is going to keep getting better and bigger. We believe in the scaling of these models.
5,871
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
There are a lot of reasons why we're going to scale easily by a million times in the coming few years, for good reasons, and we're looking forward to it and we're ready for it. And so the versatility of our platform is really quite key. If you're too brittle and too specific, you might as well just build an FPGA or an ASIC or something like that, but that's hardly a computer.
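The workload property cited above -- 5% of the code accounting for 99% of the run-time -- is exactly the regime where Amdahl's law rewards acceleration. A hedged sketch (the GPU speedup factors below are arbitrary illustrations, not figures from the call):

```python
# Amdahl's-law view of the workload property quoted above:
# a small fraction of the code accounts for nearly all of the run-time.
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when `accelerated_fraction` of run-time is sped up by `factor`."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

hot_runtime_fraction = 0.99  # "99% of the run-time" lives in the hot 5% of code
for gpu_factor in (10, 50, 100):  # illustrative acceleration factors
    print(f"{gpu_factor:>3}x on the hot path -> "
          f"{amdahl_speedup(hot_runtime_fraction, gpu_factor):.1f}x overall")
# 10x -> ~9.2x, 50x -> ~33.6x, 100x -> ~50.3x: the residual 1% caps the gain,
# which is why accelerating as much of the pipeline as possible matters.
```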
5,872
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: Our next question will come from the line of Blayne Curtis with Jefferies. Please go ahead. Blayne Curtis: Thanks for taking my question. I'm actually kind of curious: being supply constrained, how do you think about -- I mean, you came out with a product for China, H20. I'm assuming there'd be a ton of demand for it, but obviously you're trying to serve your customers with the other Hopper products. I'm just kind of curious how you're thinking about that in the second half. Could you elaborate on any impact, what you're thinking for sales as well as gross margin? Jensen Huang: I didn't hear your question. Something bleeped out. Simona Jankowski: H20, and how you're thinking about allocating supply between the different Hopper products. Jensen Huang: Well, we have customers that we honor, and we do our best for every customer. It is the case that our business in China is substantially lower than the levels of the past, and it's a lot more competitive in China now because of the limitations on our technology. And so those matters are true. However, we continue to do our best to serve the customers and the markets there, and to the best of our ability, we'll do our best. But I think overall, the comments that we made about demand outstripping supply are for the entire market, and particularly so for H200 and Blackwell toward the end of the year. Operator: Our next question will come from the line of Srini Pajjuri with Raymond James. Please go ahead. Srini Pajjuri: Thank you. Jensen, actually, more of a clarification on what you said. On GB200 systems, it looks like there is significant demand. Historically, I think you've sold a lot of HGX boards and some GPUs, and the systems business was relatively small. So I'm just curious, why is it that now you are seeing such strong demand for systems going forward? Is it just the TCO, or is it something else, or is it the architecture? Thank you.
5,873
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Yes. I appreciate that. In fact, the way we sell GB200 is the same. We disaggregate all of the components that make sense, and we integrate them with computer makers. We have 100 different computer system configurations coming this year for Blackwell, and that is off the charts. Hopper, frankly, had only half that, and that's at its peak; it started out with way less than that, even. And so you're going to see liquid-cooled versions, air-cooled versions, x86 versions, Grace versions, and so on and so forth. There's a whole bunch of systems being designed, and they're offered from all of our ecosystem of great partners. Nothing has really changed. Now, of course, the Blackwell platform has expanded our offering tremendously. The integration of CPUs and the much more compressed density of computing -- liquid cooling is going to save data centers a lot of money in provisioning power, not to mention being more energy efficient. And so it's a much better solution. It's more expansive, meaning that we offer a lot more components of a data center, and everybody wins. The data center gets much higher performance: networking switches, NICs, and of course, we have Ethernet now, so that we can bring large-scale NVIDIA AI to customers who only know how to operate Ethernet, because of the ecosystem that they have. And so Blackwell is much more expansive. We have a lot more to offer our customers this generation around. Operator: Our next question will come from the line of William Stein with Truist Securities. Please go ahead.
5,874
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: Our next question will come from the line of William Stein with Truist Securities. Please go ahead. William Stein: Great. Thanks for taking my question. Jensen, at some point, NVIDIA decided that while there are reasonably good CPUs available for data center operations, your ARM-based Grace CPU provides some real advantage that made that technology worth delivering to customers -- perhaps related to cost, or power consumption, or technical synergies between Grace and Hopper, or Grace and Blackwell. Can you address whether there could be a similar dynamic that might emerge on the client side? You've highlighted that Intel and AMD are very good partners and deliver great products in x86, but there might be some advantage, especially in emerging AI workloads, that NVIDIA can deliver where others have more of a challenge.
5,875
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Well, you mentioned some really good reasons. It is true that for many applications, our partnerships with our x86 partners are really terrific, and we build excellent systems together. But Grace allows us to do something that isn't possible with today's system configurations. The memory systems of Grace and Hopper are coherent and connected. The interconnect between the two chips -- calling them two chips is almost weird, because it's like a superchip -- the two of them are connected with this interface that's like a terabyte per second. It's off the charts. And the memory that's used by Grace is LPDDR. It's the first data center-grade low-power memory. And so we save a lot of power on every single node. And then finally, because of the architecture, because we can create our own architecture with the entire system now, we could create something that has a really large NVLink domain, which is vitally important to the next-generation large language models for inferencing. And so you saw that GB200 has a 72-GPU NVLink domain. That's like 72 Blackwells connected together into one giant GPU. And so we needed Grace Blackwell to be able to do that. So there are architectural reasons, there are software programming reasons, and there are system reasons that are essential for us to build them that way. And if we see opportunities like that, we'll explore them. And today, as you saw at Build yesterday, which I thought was really excellent, Satya announced the next-generation PCs, the Copilot+ PC, which runs fantastically on NVIDIA's RTX GPUs that are shipping in laptops. But it also supports ARM beautifully. And so it opens up opportunities for system innovation, even for PCs.
5,876
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: Our last question comes from the line of C.J. Muse with Cantor Fitzgerald. Please go ahead. C.J. Muse: Good afternoon. Thank you for taking the question. I guess, Jensen, a bit of a longer-term question. I know Blackwell hasn't even launched yet, but obviously, investors are forward-looking, and amidst rising potential competition from GPUs and custom ASICs, how are you thinking about NVIDIA's pace of innovation? Your million-fold scaling over the last decade has been truly impressive -- CUDA, varied precision, Grace, coherent connectivity. When you look forward, what frictions need to be solved in the coming decade? And I guess, maybe more importantly, what are you willing to share with us today?
5,877
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Jensen Huang: Well, I can announce that after Blackwell, there's another chip. We are on a one-year rhythm. And you can also count on us having new networking technology on a very fast rhythm. We're announcing Spectrum-X for Ethernet. We're all in on Ethernet, and we have a really exciting roadmap coming for Ethernet. We have a rich ecosystem of partners. Dell announced that they're taking Spectrum-X to market. We have a rich ecosystem of customers and partners who are going to announce taking our entire AI factory architecture to market. And so for companies that want the ultimate performance, we have the InfiniBand computing fabric. InfiniBand is a computing fabric; Ethernet is a network. InfiniBand, over the years, started out as a computing fabric and became a better and better network. Ethernet is a network, and with Spectrum-X, we're going to make it a much better computing fabric. And we're fully committed to all three links: the NVLink computing fabric for a single computing domain, the InfiniBand computing fabric, and the Ethernet networking computing fabric. And so we're going to take all three of them forward at a very fast clip. You're going to see new switches coming, new NICs coming, new capability, new software stacks that run on all three of them. New CPUs, new GPUs, new networking NICs, new switches -- a mound of chips that are coming. And the beautiful thing is, all of it runs CUDA, and all of it runs our entire software stack. So if you invest today in our software stack, without doing anything at all, it's just going to get faster and faster and faster. And if you invest in our architecture today, without doing anything, it will go to more and more clouds and more and more data centers, and everything just runs. And so I think the pace of innovation that we're bringing will drive up the capability on the one hand, and drive down the TCO on the other hand.
5,878
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
And so we should be able to scale out with the NVIDIA architecture for this new era of computing and start this new industrial revolution, where we manufacture not just software anymore, but we manufacture artificial intelligence tokens, and we're going to do that at scale. Thank you.
5,879
NVDA
1
2,025
2024-05-22 17:00:00
NVIDIA Corporation
32,307
Operator: That will conclude our question-and-answer session and our call for today. We thank you all for joining and you may now disconnect.
5,880
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Operator: Thank you. Good day, everyone, and welcome to Oracle's Fourth Quarter 2024 Earnings Call. Today's call is being recorded, and now I would like to turn the conference over to Ken Bond. Please go ahead. Ken Bond: Thank you, Krista. Good afternoon, everyone, and welcome to Oracle's Fourth Quarter and Fiscal Year 2024 Earnings Conference Call. A copy of the press release and financial tables, which includes a GAAP to non-GAAP reconciliation and other supplemental financial information, can be viewed and downloaded from our Investor Relations website. Additionally, a list of many customers who purchased Oracle Cloud services or went live on Oracle Cloud recently will be available from the Investor Relations website. On the call today are Chairman and Chief Technology Officer, Larry Ellison, and Chief Executive Officer, Safra Catz. As a reminder, today's discussion will include forward-looking statements, including predictions, expectations, estimates, or other information that might be considered forward-looking. Throughout today's discussion, we will present some important factors relating to our business which may potentially affect these forward-looking statements. These forward-looking statements are also subject to risks and uncertainties that may cause actual results to differ materially from statements being made today. As a result, we caution you against placing undue reliance on these forward-looking statements, and we encourage you to review our most recent reports, including our 10-K and 10-Q, and any applicable amendments for a complete discussion of these factors and other risks that may affect our future results or the market price of our stock. And finally, we are not obligating ourselves to revise our results or these forward-looking statements in light of new information or future events. Before taking questions, we'll begin with a few prepared remarks. And with that, I'd like to turn the call over to Safra.
5,881
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Safra Catz: Thanks, Ken, and good afternoon, everyone. Clearly, we had an absolutely incredible quarter. As you know, Oracle's Q4 is known for customers purchasing large software license contracts to power their businesses. But because of the pivot to the cloud, this Q4 was powered by the enormous demand for our cloud services, and it showed up in RPO, or remaining performance obligations. In Q4, Oracle signed the largest sales contract in our history, led by huge demand for training large language models, as well as record levels of sales for OCI, Autonomous Database, Fusion, and NetSuite. RPO was $98 billion, up $18 billion from Q3, and up 44% year-over-year from $68 billion last year. And we are trading one-time, non-recurring license revenue in return for much bigger strategic customer commitments for multi-year cloud revenue, from which we expect to further accelerate our revenue growth rates. This is exactly what we've been targeting, and it bolsters my confidence that our overall revenue, earnings, and cash flow performance, as well as our growth rates, will only get stronger and accelerate. In short, this Q4 marks the full emergence of our high-growth cloud businesses. Now, I started talking about this tipping point four years ago, and you've seen it continue to play out in our results since then. As a reminder, we accelerated our US dollar revenue growth rate from negative 1% in fiscal year 2020 to plus 8% this past year, if you exclude Cerner. In addition, EPS has grown at a 10% compounded annual growth rate over that same period. And both operating cash flow and free cash flow, which of course we report on a trailing 12-month basis, were each declining 10% four years ago. This year, they grew 9% and 39%, respectively. Customer conversations are now absolutely focused on our cloud services, as the results clearly show. So let me give you a few examples. First, as you saw, OpenAI selected Oracle to run deep learning and AI workloads on Oracle Cloud Infrastructure.
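Since the RPO balance is the headline number of the quarter, a quick arithmetic check of the figures just quoted may help. Below is a minimal sketch in Python; the variable names are my own, and the inputs are the dollar amounts from the call.

```python
# Back-of-the-envelope check of the RPO figures quoted above (all in $B).
rpo_q4_fy24 = 98.0                  # end of Q4 FY24
rpo_q3_fy24 = rpo_q4_fy24 - 18.0    # implied Q3 balance: $80B
rpo_q4_fy23 = 68.0                  # a year earlier

yoy_growth = rpo_q4_fy24 / rpo_q4_fy23 - 1
print(f"Sequential increase: ${rpo_q4_fy24 - rpo_q3_fy24:.0f}B")
print(f"Year-over-year growth: {yoy_growth:.1%}")  # ~44.1%, matching the stated 44%
```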
5,882
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Like many others, OpenAI chose OCI because it is the world's fastest and most cost-effective AI infrastructure. In total, we signed over 30 AI contracts for over $12 billion this quarter, and nearly $17 billion this year. Second, we continue to expand our work helping companies use our cloud applications portfolio to reinvent their businesses. As an example, a very large enterprise tech company signed a contract in Q4 for over $600 million, where we will be helping them transform their operations with Fusion to enable them to become more agile, faster-growing, and more profitable. And may I say, in the process, we will replace many of our competitors' products. These cross-pillar cloud deals, or suite deals, focus on business process reengineering and incorporate multiple cloud applications that no one else can offer. And I want to point out, by the way, that today is day 11 of our new fiscal year, and we are once again announcing our results not only for the quarter but for the year, and giving guidance, making us faster than any other public company by a long shot. We are able to do this because of Fusion applications, and that is why companies are choosing Fusion, and our wonderful teams are showing them the way. And third, I'm pleased to announce that we've signed another multi-cloud partnership, this time with Google. The OCI and Google Cloud network interconnect is available immediately in 10 regions, and we will be live with Oracle Database at Google Cloud in September, where customers can get direct access to Oracle Database services running on OCI, deployed in Google Cloud data centers. So what's driving this? Well, it is all about our comprehensive, highly differentiated, and secure cloud offering. Customers have progressed from their initial curiosity about Oracle Cloud into full-blown rollout.
5,883
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
We have the most secure, complete, and cost-effective set of enterprise applications and infrastructure cloud technologies of any vendor. Not only are our cloud technologies vertically integrated to work together, but we offer flexible deployment models like public cloud, multi-cloud, sovereign cloud, dedicated cloud, or any other way our customers ask us to deliver. And we also offer Oracle Alloy, where Oracle partners become cloud providers offering customized cloud services alongside the Oracle Cloud. Now I'm going to dive into the details of Q4 and finish my prepared remarks with how this strength and momentum will impact fiscal year 2025 and beyond. Okay, so let's start. In Q4, the dollar strengthened from the time of my Q4 guidance, so we saw a 1% currency headwind to total revenue and a $0.01 currency headwind to EPS. As usual, I'll be discussing our financials using constant currency growth rates, because this is how we manage the business. Total cloud revenue, that is SaaS plus IaaS, excluding Cerner, was $4.7 billion, up 23%; including Cerner, total cloud revenue was up 20% at $5.3 billion. SaaS revenue was $3.3 billion, up 10%, and IaaS revenue was $2 billion, up 42% on top of last year's 77% growth. Total cloud services and license support for the quarter was $10.2 billion, up 10%, driven again by our strategic cloud applications, Autonomous Database, and OCI. Application subscription revenues, which include product support, were $4.6 billion, up 6%. Our strategic back-office SaaS applications now have annualized revenue of $7.7 billion and were up 16%. Infrastructure subscription revenues, which include license support, were $5.6 billion, up 13%. Infrastructure cloud services revenue was up 42%. Excluding legacy hosting, OCI Gen2 infrastructure cloud services grew 44%, with annualized revenue of $7.4 billion. OCI consumption revenue was up 53%.
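To make the mix arithmetic here concrete, a minimal consistency check in Python follows; the $B figures are the ones just quoted, and the inference about what the SaaS and IaaS components include is my reading rather than anything stated on the call.

```python
# Consistency check on the Q4 cloud revenue mix quoted above (all in $B).
total_cloud_incl_cerner = 5.3
total_cloud_excl_cerner = 4.7
saas, iaas = 3.3, 2.0

# Cerner's cloud contribution is implied by the two totals:
print(f"Implied Cerner cloud revenue: ~${total_cloud_incl_cerner - total_cloud_excl_cerner:.1f}B")

# SaaS + IaaS sums to the including-Cerner total, which suggests (my reading,
# not an explicit statement on the call) those components include Cerner:
print(f"SaaS + IaaS = ${saas + iaas:.1f}B")
```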
5,884
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Were it not for continuing supply constraints, consumption growth would have been even higher. Database subscriptions, which include database license support, were up 6%, highlighted by cloud database services, which were up 26% and now have annualized revenue of $2 billion. Very importantly, as on-premise databases migrate to the cloud, either to OCI directly or using Database at Azure or Database at Google Cloud, we expect these cloud database services will be the third leg of revenue growth alongside OCI and strategic SaaS. Consistent with our strategic direction and reflecting customer preference for cloud services, software license revenues were down 14% to $1.8 billion. So all in, total revenues for the quarter were $14.3 billion. That's up 4% if you include Cerner, up 5% excluding Cerner. Shifting to margins. The gross margin for cloud services and license support was 77%. This is a result of the mix between support and cloud, in which cloud is growing much faster than support. The gross margin percentages for software support and SaaS are consistent with last year, while IaaS gross margins improved substantially. Gross margins will go higher as more of our cloud regions fill up. We monitor our expenses carefully to ensure gross margin percentages expand as we scale. To that point, the gross profit dollars of cloud services and license support grew 8% in Q4. Non-GAAP operating income was $6.7 billion, up 9% from last year. The operating margin was 47%, up from 44% last year, as we continue to drive more efficiencies in our business. Looking forward, as we continue to benefit from economies of scale in the cloud, we will not only continue to grow operating income, but we will also expand the operating margin percentages. The non-GAAP tax rate came in over 1 point higher than my guidance, at 20.1%; non-GAAP EPS was $1.63 and GAAP EPS was $1.11 in USD.
5,885
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
As a reminder, the non-GAAP tax rate last year was 9.2%, and this had an adverse effect on this quarter's EPS growth. Non-GAAP pretax income grew 14% in constant currency. So you can figure out that had we had the same tax rate last year as this year, net income would have grown 14%, and EPS would have been up 12% in constant currency and 11% in USD. For the full fiscal year, total company revenue was $53 billion, up 6%. Total cloud services and license support revenue, which is entirely subscription-based and accounts for nearly three-quarters of total revenue, was $39.4 billion, up 11%. Total application subscription revenues grew 9%, and infrastructure subscription revenue grew 13%. Total cloud services, excluding Cerner, were up 26% to $17.2 billion. SaaS revenue, excluding Cerner, was up 13% to $10.4 billion for the year. IaaS cloud infrastructure revenue was up 50% to $6.8 billion for the year, with consumption revenue up 66% from last year. Non-GAAP EPS for the full year was $5.56, up 9% in USD, and the full-year operating margin percentage was 44%, up from 42% last year. At quarter end, we had nearly $10.7 billion in cash and marketable securities, and the short-term deferred revenue balance was $9.3 billion, up 4%. Over the last four quarters, operating cash flow was $18.7 billion, up 9%, and free cash flow was $11.8 billion, up 39%. Capital expenditures were $6.9 billion. As I mentioned, our remaining performance obligations, or RPO, is now $98 billion, up 44% in constant currency, and the portion excluding Cerner, if you're curious, was up 60%. We signed several large deals in this quarter, and we have many, many more in the pipeline. Approximately 39% of total RPO is expected to be recognized as revenue over the next 12 months. And this reflects the growing trend of customers wanting larger contracts as they see firsthand how Oracle Cloud services are benefiting their businesses.
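Safra's tax-rate point rewards a worked example. The sketch below (Python; the figures are from the call, and the normalization is a standard identity rather than Oracle's disclosed method) shows how moving from a 9.2% to a 20.1% non-GAAP tax rate can turn 14% pretax growth into roughly flat reported net income, and why holding the rate constant recovers the 14%.

```python
# Illustration of the tax-rate effect described above; inputs are from the call.
pretax_growth = 0.14   # non-GAAP pretax income growth, constant currency
t_now = 0.201          # this year's non-GAAP tax rate
t_prior = 0.092        # last year's non-GAAP tax rate

# Net income = pretax * (1 - tax rate), so the reported growth rate is:
reported_net_growth = (1 + pretax_growth) * (1 - t_now) / (1 - t_prior) - 1
print(f"Reported net income growth:     {reported_net_growth:.1%}")  # ~0.3%

# Holding the tax rate constant, net income simply tracks pretax income:
print(f"Constant-tax net income growth: {pretax_growth:.0%}")        # 14%
```

Safra's 12% constant-currency EPS figure sits a bit below the 14% net-income figure, presumably reflecting share-count changes.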
5,886
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Now, while we spent $3.5 billion on CapEx this quarter, the $2.8 billion shown in the cash flow statement is lower simply as a result of the timing of payments. We are working as quickly as we can to get cloud capacity built out, given the enormity of our backlog and pipeline. At this moment, we have 76 customer-facing cloud regions live, with 47 public cloud regions around the world and another 19 being built. We have 11 Database at Azure sites live and more locations with Microsoft coming online soon. We will have 12 Oracle Database at Google Cloud sites live this year. We also have 13 dedicated regions live and 15 more planned. We have several national security regions and EU sovereign regions live, with increasing demand for more of each. And finally, we already have two Alloy cloud regions live, with 11 more planned. Of course, we also have many, many cloud at customer installations. As I mentioned earlier, the size, flexibility, and deployment optionality of our cloud regions continue to be incredibly advantageous for us in the marketplace. This quarter, we purchased 1.25 million shares for a total of $150 million. In addition, we paid out dividends of $4.4 billion over the last 12 months, and the Board of Directors today declared a dividend of $0.40 per share. Before I discuss my guidance for Q1 and fiscal 2025, I just want to give you a couple of notes. The first is that in Q4, we decided to exit the advertising business, which had declined to about $300 million in revenue in fiscal year 2024. Also, I will no longer be breaking out the Cerner business in my results. Even though it will begin to grow modestly throughout the year in both revenue and operating margin, it's not necessary to break it out anymore because it is now operating in growth mode. Now to guidance.
5,887
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Throughout fiscal year 2025, I expect continued strong cloud demand to push Oracle sales and RPO even higher and result in double-digit revenue growth this fiscal year. I also expect that each successive quarter should grow faster than the previous quarter, as OCI capacity increases to meet demand. We believe our current momentum will continue, as our pipeline is growing even faster than bookings and our win rates are going higher as well. I expect fiscal year 2025 cloud infrastructure services to grow faster than the 50% we reported this year. CapEx in fiscal year 2025 will probably be double what it was in fiscal year 2024. Okay. Beyond this fiscal year, I remain firmly committed to our fiscal year 2026 financial goals for revenue, operating margins, and EPS growth. However, given our strong bookings results, I believe some of these goals might prove to be too conservative given our momentum. We are going to provide you a more fulsome update on all of this at the Financial Analyst Meeting at Oracle CloudWorld in Las Vegas in September. Okay, let me now turn to my guidance for Q1, which I'll review on a non-GAAP basis. Now, if currency exchange rates remain the same as they are now, currency should have a negative 1% effect on revenue and a $0.01 or $0.02 negative effect on EPS in Q1. However, as you all know, the actual currency impact may be more or less; I just can't gauge that now. Total revenues for Q1 are expected to grow 6% to 8% in constant currency and, using the currency situation as it is now, 5% to 7% in USD. Total cloud revenue is expected to grow 21% to 23% in constant currency and 20% to 22% in USD. Non-GAAP EPS is expected to grow between 11% and 15% and be between $1.33 and $1.37 in constant currency. Non-GAAP EPS is expected to grow between 10% and 14% and be between $1.31 and $1.35 in USD. My EPS guidance for Q1 assumes a base tax rate of 20%.
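The relationship between the constant-currency and USD guidance ranges is, to a first approximation, simple additive translation. A minimal sketch (Python, my own variable names), assuming the roughly 1-point FX headwind Safra cites applies uniformly across the range:

```python
# Approximate translation of Q1 revenue guidance from constant currency to USD.
cc_growth_range = (0.06, 0.08)  # total revenue guidance, constant currency
fx_headwind = -0.01             # expected currency effect per the call

usd_growth_range = tuple(round(g + fx_headwind, 2) for g in cc_growth_range)
print(usd_growth_range)  # (0.05, 0.07) -> the stated 5% to 7% in USD
```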
5,888
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
And as always, one-time tax events could cause the actual tax rates to vary from my guidance. Okay. I know that was long. But with that, let me turn it over to Larry for his comments.
5,889
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Larry Ellison: Thank you, Safra. I'm going to start by repeating something Safra said. In Q4, Oracle's company-wide RPO increased 44% to $98 billion. In AI alone, we signed contracts with 30 different customers for $12.5 billion in new AI business. These astonishing RPO numbers, 44% and $98 billion, were driven by massive increases in sales of Oracle Cloud Infrastructure, OCI. So who are the companies choosing to use Oracle Cloud services and Oracle data centers? Well, here are a few names: NVIDIA, Microsoft, Google, xAI, OpenAI, Cohere, and dozens more. In other words, the world's largest cloud companies and the world's most successful and accomplished AI companies choose to use Oracle cloud services and data centers. So why are they working with Oracle? Because Oracle's Gen 2 cloud infrastructure is different. OCI's RDMA network moves data much faster, and when you charge by the minute, faster also means less expensive. OCI trains large language models several times faster and at a fraction of the cost of other clouds. OCI's critical cloud software, the operating system and the database, is fully autonomous. At OCI, human beings do not run the operating system or the database; autonomous software robots do. No one else has this level of autonomy in the cloud. Eliminating human labor eliminates human error. Almost all cloud security breaches begin with human error; eliminating the possibility of human error is the only way to make certain your cloud data is not stolen. That's it. The most important technology companies in the world are using OCI because it's faster, less expensive, and more secure. Easy to say, not easy to do. Back to you, Ken. Ken Bond: Thank you, Larry. Krista, if you could please poll the audience for questions, we'll begin the Q&A portion of the call.
5,890
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Operator: Thank you. Our first question comes from Raimo Lenschow with Barclays. Please go ahead. Raimo Lenschow: Perfect, thank you, and congrats from me. These are very impressive numbers. Safra, can you help us bridge the strong RPO number and how we need to think about it feeding into revenue? Is that just a capacity function? Or is there anything on the customer side or on the technology side that you need to deliver? Just help us bridge the gap on those. Thank you. Safra Catz: It's all about capacity. As we bring the capacity online, wherever it's going online around the world, is when those workloads are coming over. A lot of the engineering work is done in advance so that those customers know how they can operate. They bring smaller workloads, but for the bigger workloads, they are just waiting for us to go online and make it available to them. It is really at that level. We are scheduling them on our availability. And as I mentioned, our pipeline to take more deals is all about us just getting the capacity up and live and moving forward. Raimo Lenschow: So it's just a mechanical problem, in a way. Safra Catz: Yes. Well, it's not a problem. It's just the schedule. As things come online, as the data centers go live or as we deliver the computers, it's just very straightforward. There's no magic here. These customers have done a lot of the analysis and the engineering in advance and have tested us or competed us against our competitors and have chosen us, already understanding how we work, and they're just waiting for us to give them more capacity. Raimo Lenschow: Great. Very impressive. Thank you.
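Safra's "it's all about capacity" answer connects back to the RPO conversion figure from the prepared remarks: roughly 39% of the $98 billion balance is expected to turn into revenue within 12 months. A minimal sketch of that bridge (Python; the caveat in the comment is my own framing, not a statement from the call):

```python
# Rough RPO-to-revenue bridge using figures from the prepared remarks (in $B).
total_rpo = 98.0        # remaining performance obligations at quarter end
next_12m_share = 0.39   # portion expected to be recognized within 12 months

print(f"Implied revenue from existing RPO over next 12 months: "
      f"${total_rpo * next_12m_share:.1f}B")
# ~$38.2B; actual revenue would also include new bookings signed and billed
# within the period, which is one reason this is not a revenue forecast.
```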
5,891
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Operator: Your next question comes from Brad Zelnick with Deutsche Bank. Please go ahead. Brad Zelnick: Great. Thank you very much, and congrats from me as well. Larry, it's great to see the amazing momentum in OCI, especially given it's a competitive market and the leading names in AI are coming to you, wanting to partner with Oracle. Can you talk about the innovation roadmap for OCI and your AI services in particular? And why should we expect Oracle to keep on winning, not just today but over the next several years, in this market?
5,892
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Larry Ellison: Okay. Well, in OCI, we've talked for a while about our ability to build very small data centers, ones you could put in a ship or a submarine. And a full cloud, a full Oracle Cloud, we will soon have in six standard half racks to go into a conventional data center. So virtually any one of our customers could choose to have the full Oracle Cloud in their data center, with every service, every service in the cloud. And they could scale that up quite extraordinarily large. So we talk about the fact that we can start very small, and that's a huge difference between us and our competitors. We can actually do it customer by customer; small countries, we can do. What we haven't talked so much about is that we're also building the largest data centers in the world. I think we talked briefly about one last call: a 70-megawatt data center, a huge AI training data center, where we can park eight 747s nose to tail. We're also building a 200-megawatt data center. In fact, this past quarter, we sold about half of that data center for a period of time. So we're now bringing 200-megawatt data centers online. So we are literally building the smallest, most portable, most affordable cloud data centers, all the way up to 200-megawatt data centers ideal for training very large language models and keeping them up to date. This AI race is going to go on for a long time. It's not simply a matter of getting ahead in AI; you also have to keep your model current, and that's going to take larger and larger data centers. And some of the data centers we are planning are actually even bigger; some are getting very close to, say, a gigawatt, which is the power of a pretty good-sized city, for one enormous AI cloud training data center. No one else can span this range.
5,893
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
And in every case, we have unbelievably fast networks that are part of this. The data centers we are building include the power plants, the transmission of the power directly into the data center, and liquid cooling, because these modern data centers are moving from air cooled to liquid cooled, and you have to engineer them from scratch. That's what we've been doing for some time, and that's what we'll continue to do. And currently, we are leading the pack in being able to deliver that quality and that scale of data center.
5,894
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Brad Zelnick: Amazing, thank you so much, Larry. Operator: Next question comes from Siti Panigrahi with Mizuho. Please go ahead. Siti Panigrahi: Thank you. Larry and Safra, it's impressive to see how fast you ramped OCI, as you're now available in 11 data centers. And now with this Google partnership, we'll have Oracle Database at Google Cloud. So I have two questions: one is, as you embark on offering this multi-cloud flexibility to customers, when can we see a similar partnership with AWS? And second is, how should we think about these partnerships helping your customers migrate their on-prem Oracle workloads to the cloud? Safra Catz: I don't know, Larry, do you want to start with that? Larry Ellison: I guess I can start. Well, we believe in giving customers choice, and customers want choice. Customers are using multiple clouds, not only infrastructure clouds; they might have Salesforce applications or Workday applications, or they use multiple clouds in their business right now. So it's very important, we think, that all the clouds become interconnected. So we're thrilled to have the connection with Microsoft and be building OCI data centers right inside of Azure, so the computers are next to each other, to minimize network costs and network latency, which are all good things. We're doing the same thing with Google. We would love to do the same thing with AWS. We think we should be interconnected to everybody. And that's what we're attempting to do in our multi-cloud strategy. I think that's what customers want. So I'm optimistic that's the way the world will settle out. We'll get rid of these fees for moving data from cloud to cloud, and all the clouds will be interconnected, and customers can pick their favorite service from their favorite cloud and mix and match whatever they want to use, and do it easily and seamlessly.
5,895
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Siti Panigrahi: Thank you. Operator: Your next question comes from the line of Alex Zukin with Wolfe Research. Please go ahead. Alex Zukin: Hi guys. Thanks for taking the question. I wanted to dive a bit deeper on just precisely how many deployment models you guys are offering for OCI, because it feels as though that is getting particularly differentiated as we start to think of sovereign cloud, GovCloud, more private cloud, given the conservative posture for AI and data privacy. So how should we think about how much of an advantage that is providing in sales cycles, and maybe in that massive $30-plus billion of second-half RPO? But also, just comment on the magnitude of that opportunity going forward.
5,896
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Larry Ellison: I'm going to take a swing at this one. Every medium-sized on-premise customer that Oracle has could have a private, full Oracle Cloud where they have no neighbors. They are the only user of that Oracle Cloud, and we can install that in their existing data centers. Nobody else can do that; you have to move to the public cloud. Now, we have a public cloud, we have a lot of public cloud regions, and we love the public cloud. But if you're very conservative and you want to absolutely maximize security, and that's important to you, we can put in a full Oracle Cloud, and we run it. We pay for the hardware -- again, it's an Oracle region. We put in an Oracle cloud region, and let me just make up a name: Samsung. We could build a cloud region for Samsung -- in fact, two cloud regions for Samsung. We could do two cloud regions for, making up names, General Motors, Ford, any company. Those are pretty big companies, but much smaller companies as well. So we're the only ones that give you the option to have the full capability of a public cloud, run by Oracle, with all of our services, every single one of our services -- you don't pay for the hardware, you just pay for what you use -- put directly on your premises. And you can use it, and no one else is in that cloud. We can do that; no one else can. We can put them on ships and on submarines; no one else can do it, because we can start very, very small. All Oracle clouds are identical, except for scale. All Oracle clouds have all Oracle services. All Oracle clouds are fully automated; because they're identical, they're fully automated. So one of the reasons we took a little bit longer to get our cloud out was because we built something quite different than what our competitors have. And that allows us to go from very small to very large using the same automation software.
5,897
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
I think some of our competitors' large data centers are quite different than their other data centers. They might have different -- some services might be available in some data centers and not in others. They took a very different approach to what we did. We had the advantage of seeing what all the other guys did, and we took a different road. It took us a bit longer, but we think we're better off in terms of security. We're better off in terms of scalability -- by the way, that means the ability to go down in size and up in size. It allows us to get to every corner of the globe and provide a level of privacy for your data that other cloud providers cannot provide.
5,898
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Safra Catz: Yes. And because, as Larry said, whatever the deployment model is, you don't have to compromise. Some of our competitors may offer some level of sovereignty or some level of disconnected operation, but they don't actually have all the services. For us, the reason we've been so successful is that whether it's disconnected or sovereign or whatever it is, the customer always gets everything -- all services, not just some services -- and they get to deploy it any way they want, and they get the security and the regulatory requirements; sovereignty may be very critical. And for most governments, they don't want their data in the public cloud, out and about. They want it sovereign to their country. And so, no compromises: no compromises on the services and no compromises on security. Alex Zukin: It also sounds like you guys have a better price in most cases. So thanks again to [indiscernible] tough quarter. Safra Catz: Much better, because we are so much faster. When you use our cloud -- it is new, it's modern, but it also has technical advantages -- it runs your workloads so much more quickly. And when you pay by the minute, the second, the hour, if your workload ends in one-tenth the time, you pay one-tenth the price. That's very hard to compete with.
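Safra's pay-by-time argument is straightforward to formalize: with metered pricing, cost scales linearly with runtime, so a cloud that runs the same job 10x faster at the same hourly rate is 10x cheaper. A toy model (Python; the rates and durations are made up for illustration):

```python
# Toy model of metered cloud pricing; all numbers are invented for illustration.
def job_cost(rate_per_hour: float, hours: float) -> float:
    """You pay only for the time the workload actually runs."""
    return rate_per_hour * hours

slower_cloud = job_cost(rate_per_hour=10.0, hours=100.0)  # $1,000
faster_cloud = job_cost(rate_per_hour=10.0, hours=10.0)   # $100 for the same job

print(slower_cloud, faster_cloud)  # one-tenth the time -> one-tenth the price
```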
5,899
ORCL
4
2,024
2024-06-11 17:00:00
Oracle Corporation
22,247
Larry Ellison: One last comment, maybe one last comment. The other thing is, our cloud was designed not for hundreds of regions, but for thousands or possibly even tens of thousands of data centers and regions. That's why we had to put in a high degree of automation. There is no way we could run these data centers manually. There are too many of them, and we're building them too fast. We couldn't hire people fast enough and train people fast enough. And the risk of them making a mistake, an error, is the risk that they start exposing our customers' data. So they are highly automated. It's a little bit like, if I may compare it to the satellites that Elon Musk puts in the sky: Starlink has -- he has more satellites than everyone else in the world combined because, again, it is a very different satellite system, one that's designed for a very large number of satellites that are highly automated. Same model: lots and lots of them, with 100% or nearly 100% automation to run these clouds. Operator: Your next question comes from Kirk Materne with Evercore ISI. Please go ahead. Kirk Materne: Yeah. Thanks very much. I'll echo the congrats on the cloud momentum. Larry, Safra, I was wondering if you could just expand a bit on the OpenAI announcement this afternoon. Just what does that entail in terms of how you'll be working with them or Microsoft? Are there certain workloads they'll be working on with you directly? Can you just give us whatever additional color you can on that deal? Obviously very excited. Thanks. Safra Catz: Well. Go ahead. No, you can. Go ahead.