Thomson Reuters StreetEvents Event Transcript
E D I T E D   V E R S I O N

Q2 2020 NVIDIA Corp Earnings Call
AUGUST 15, 2019 / 9:30PM GMT

================================================================================
Corporate Participants
================================================================================

 * Colette M. Kress
   NVIDIA Corporation - Executive VP & CFO
 * Jen-Hsun Huang
   NVIDIA Corporation - Co-Founder, CEO, President & Director
 * Simona Jankowski
   NVIDIA Corporation - VP of IR

================================================================================
Conference Call Participants
================================================================================

 * Toshiya Hari
   Goldman Sachs Group Inc., Research Division - MD
 * Vivek Arya
   BofA Merrill Lynch, Research Division - Director
 * Aaron Christopher Rakers
   Wells Fargo Securities, LLC, Research Division - MD of IT Hardware & Networking Equipment and Senior Analyst
 * Joseph Lawrence Moore
   Morgan Stanley, Research Division - Executive Director
 * Stacy Aaron Rasgon
   Sanford C. Bernstein & Co., LLC., Research Division - Senior Analyst
 * Timothy Michael Arcuri
   UBS Investment Bank, Research Division - MD and Head of Semiconductors & Semiconductor Equipment
 * Harlan Sur
   JP Morgan Chase & Co, Research Division - Senior Analyst
 * Christopher James Muse
   Evercore ISI Institutional Equities, Research Division - Senior MD, Head of Global Semiconductor Research & Senior Equity Research Analyst
 * Matthew D. Ramsay
   Cowen and Company, LLC, Research Division - MD & Senior Technology Analyst

================================================================================
Presentation
================================================================================
--------------------------------------------------------------------------------
Operator    [1]
--------------------------------------------------------------------------------

          Good afternoon. My name is Christina, and I will be your conference operator today. Welcome to NVIDIA's financial results conference call. (Operator Instructions)
I'll now turn the call over to Simona Jankowski from Investor Relations to begin your conference.

--------------------------------------------------------------------------------
Simona Jankowski,  NVIDIA Corporation - VP of IR    [2]
--------------------------------------------------------------------------------

          Thank you. Good afternoon, everyone, and welcome to NVIDIA's Conference Call for the Second Quarter of Fiscal 2020. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2020. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Form 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 15, 2019, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [3]
--------------------------------------------------------------------------------

          Thanks, Simona. Q2 revenue was $2.58 billion, in line with our outlook, down 17% year-on-year and up 16% sequentially.
Starting with our gaming business. Revenue of $1.31 billion was down 27% year-on-year and up 24% sequentially. We are pleased with the strong sequential growth in the quarter, when we launched our RTX SUPER lineup for desktop gamers, ramped up our greatest-ever number of gaming laptops and launched our new RTX Studio laptops for creators. In July, we unveiled 3 GeForce RTX SUPER GPUs, delivering best-in-class gaming performance and power efficiency and real-time ray tracing for both current and next-generation games. These GPUs delivered a performance boost of up to 24% over our initial Turing GPUs launched a year earlier. The SUPER lineup strengthens our leadership in the high end of the market, and the response has been great. We look forward to delighting gamers with the best performance in ray tracing as we get into the back to school and holiday shopping seasons.
Ray tracing is taking the gaming industry by storm and has quickly come to define the modern era of computer graphics. A growing number of blockbuster AAA titles have announced support for NVIDIA RTX ray tracing, including Call of Duty: Modern Warfare, Super Punk 2077 (sic) [Cyberpunk 2077], Watch Dogs: Legion and Wolfenstein: Youngblood. Excitement around these titles is tremendous. GameSpot called Cyberpunk one of the most anticipated games of the decade. NVIDIA GeForce RTX GPUs are the only graphics cards on the market with hardware support for ray tracing. They deliver a 2 to 3x performance speedup over GPUs without a dedicated ray tracing core.
The laptop business continues to be a standout growth driver as OEMs are ramping a record 100-plus gaming laptop models ahead of the back to school and holiday season. The combination of our energy-efficient Turing architecture and Max-Q technology enables beautifully crafted thin and light form factors that can deliver the performance of a high-end gaming desktop or a next-generation console.
At Computex in May, we unveiled NVIDIA RTX Studio laptops, a new design artist platform that extends our reach to the large, underserved market of creators. In the age of YouTube, creators and freelancers are a rapidly growing population, but they have traditionally not had access to professional-grade workstations through online and retail channels. RTX Studio laptops are designed to meet their increasingly complex workflows such as photorealistic ray tracing, AI image enhancement and ultra high-resolution video. Powered by our RTX GPUs and optimized software, RTX Studio laptops deliver performance that's up to 7x faster than that of the MacBook Pro. A total of 27 RTX Studio models have been announced by major OEMs.
Sequential growth also benefited from the production ramp of the 2 new models of the Nintendo Switch gaming console. We are expecting our console business to remain strong in Q3 before the seasonal production slowdown in Q4, when console-related revenue is expected to be fairly minimal, similar to last year.
Moving to data center. Revenue was $655 million, down 14% year-on-year and up 3% sequentially. In the vertical industries portion of the business, expanding AI workloads drove sequential and year-over-year growth. In the hyperscale portion, we continue to be impacted by relatively weak overall spending at a handful of CSPs. Sales of NVIDIA GPUs for use in the cloud were solid. While sales for internal hyperscale use were muted, the engineering focus on AI is growing.
Let me give some color on each of these areas. We are building a broad base of customers across multiple industries as they adopt NVIDIA's platforms to harness the power of AI. Public sector, higher education and financial services were among the key verticals driving growth this quarter. In addition, we won lighthouse account deals in important industries that are on the cusp of being transformed by AI. For example, in retail, Walmart is using NVIDIA GPUs to run some of its product demand forecasting models, slashing the time to do so to just 4 hours from several weeks on CPUs. By accelerating its data science workflow, Walmart can improve its algorithms, reduce development cycles and test new features.
Earlier this week, we announced breakthroughs for the fastest training and inference of the state-of-the-art model for natural language understanding called BERT, or Bidirectional Encoder Representations from Transformers, a breakthrough AI language model that achieves a deeper sense of language, context and meaning. This can enable near-human comprehension in real time by chatbots, intelligent personal assistants and search engines. We are working with Microsoft as an early adopter of these advances.
AI computing leadership is a high priority for NVIDIA. Last month, we set records for training deep learning neural network models on the latest MLPerf benchmarks, particularly in the most demanding areas. In just 7 months, we have achieved up to 80% speed-ups enabled by new algorithms and software optimizations across the full stack while using the same hardware. This is a direct result of the productive programming environment and flexibility of CUDA.
Delivering AI at scale isn't just about silicon. It's about optimizing across the entire high-performance computing system. In fact, the NVIDIA AI platform is getting progressively faster. Every month, we publish new optimizations and performance improvements to the CUDA-X AI libraries, supporting every AI framework and development environment. All in, our ecosystem of developers is now 1.4 million strong.
In setting these MLPerf records, we leveraged our new DGX SuperPOD AI supercomputer, demonstrating that leadership in AI research demands leadership in computing infrastructure. This system debuted in June at #22 on the TOP500 list of the world's fastest supercomputers at the annual International Supercomputing Conference. Used to meet the massive demands of our autonomous vehicle development program, it is powered by more than 1,500 NVIDIA V100 Tensor Core GPUs linked with Mellanox interconnects. We've made DGX SuperPOD available commercially to customers, essentially providing them with a turnkey supercomputer that they can assemble in weeks rather than months. It is roughly 400x smaller in size than other similarly performing TOP500 systems, which are built from thousands of servers.
Also at the conference, we announced that by next year's end, we will make available to the ARM ecosystem NVIDIA's full stack of AI and HPC software, which accelerates more than 600 HPC applications and all AI frameworks. With this announcement, NVIDIA will accelerate all major CPU architectures, including x86, POWER and ARM.
Lastly, regarding our pending acquisition of Mellanox, we have received regulatory approval in the U.S. and are engaged with regulators in Europe and China. The approval process is progressing as expected, and we continue to work toward closing the deal by the end of this calendar year.
Moving to pro visualization. Revenue reached $291 million, up 4% from the prior year and up 9% sequentially. Year-on-year and sequential growth was led by record revenue for mobile workstations with strong demand for new thin and light form factors. We had a great showing at SIGGRAPH, the computer graphics industry's biggest annual conference, held in Los Angeles. Our researchers won several Best in Show awards. In just a year since the launch of RTX ray tracing, over 40 design and creative applications with RTX technology have been announced by leading software vendors, including Adobe, Autodesk, Dassault Systèmes and many others. NVIDIA RTX technology has reinvigorated the computer graphics industry by enabling researchers and developers to take a leap in photorealistic rendering, augmented reality and virtual reality.
Finally, turning to automotive. Q2 revenue was $209 million, up 30% from a year ago and up 26% sequentially. This reflects growing adoption of next-generation AI cockpit solutions and autonomous vehicle development projects, including 1 particularly sizable development services transaction that was recognized in the quarter. In addition, in June, we announced a new partnership with the Volvo Group to develop AI and autonomous trucks utilizing NVIDIA's end-to-end AI platform for training, simulation and in-vehicle computing. The strategic partnership will enable Volvo Group to develop a wide range of autonomous driving solutions for freight transport, recycling collection, public transport, construction, mining, forestry and more. This collaboration is a great validation of our long-held position that every vehicle, not just cars but also trucks, shuttles, buses, taxis and many others, will have autonomous capability 1 day.
Autonomous features can bring enormous value to the trucking industry, in particular as the demands of online shopping put ever greater stress on the world's transport systems. Expectations for overnight or same-day deliveries create challenges that can only be met by autonomous trucks, which can operate 24 hours a day. To help address these needs, NVIDIA has created an end-to-end platform for autonomous vehicles, from AI computing infrastructure to large-scale simulation to in-car computing. Multiple customers, from OEMs like Mercedes-Benz, Toyota and Volvo to Tier 1s like Bosch, Continental and ZF, are already onboard. We see this as a $30 billion addressable market by 2025.
Moving to the rest of the P&L. Q2 GAAP gross margin was 59.8%, and non-GAAP gross margin was 60.1%, both up sequentially, reflecting higher automotive development services revenue, a favorable mix in gaming and lower component costs. GAAP operating expenses were $970 million, and non-GAAP operating expenses were $749 million, up 19% and 8% year-on-year, respectively. We remain on track for high single-digit OpEx growth in fiscal 2020 while continuing to invest in the key platforms driving our long-term growth, namely graphics, AI and self-driving cars. GAAP EPS was $0.90, down 49% from a year earlier. Non-GAAP EPS was $1.24, down 36% from a year ago.
With that, let me turn to the outlook for the third quarter of fiscal 2020. We expect revenue to be $2.9 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 62% and 62.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $980 million and $765 million, respectively. GAAP and non-GAAP OI&E are both expected to be income of approximately $25 million. GAAP and non-GAAP tax rates are both expected to be 10%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $100 million to $120 million. Further financial details are included in the CFO commentary and other information available on our IR website.
In closing, let me highlight upcoming events for the financial community. We will be at the Jefferies hardware and communications infrastructure summit on August 27 and at the Citi Global Technology Conference on September 25.
With that, we will now open the call for questions. Operator, would you please poll for the questions?


================================================================================
Questions and Answers
================================================================================
--------------------------------------------------------------------------------
Operator    [1]
--------------------------------------------------------------------------------

          (Operator Instructions) And your first question comes from the line of C.J. Muse with Evercore.

--------------------------------------------------------------------------------
Christopher James Muse,  Evercore ISI Institutional Equities, Research Division - Senior MD, Head of Global Semiconductor Research & Senior Equity Research Analyst    [2]
--------------------------------------------------------------------------------

          I guess first question on gaming, how should we think about your outlook into the October quarter vis-à-vis kind of normal seasonality? How are you thinking about Switch within that? And considering now that you have the full Turing lineup as well as content truly coming to the forefront here, how do you think about trends beyond the October quarter?

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [3]
--------------------------------------------------------------------------------

          Sure. Colette, why don't you take the Switch question? And then I'll take the rest of the RTX questions.

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [4]
--------------------------------------------------------------------------------

          Sure. From a gaming perspective, the overall Switch or the overall console business definitely is a seasonal business. We usually expect to see production ramping in Q2 and in Q3, with it coming down likely in Q4. So you should see Switch to be a portion definitely of our gaming business in Q3.

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [5]
--------------------------------------------------------------------------------

          Yes. C.J., thanks for the question. RTX, as you know, is -- first of all, RTX is doing great. I think we've put all the pieces in place to bring ray tracing into the future of games. The number of blockbuster games that have adopted RTX is really snowballing. We announced several -- 6 games in the last couple of months. There are going to be some exciting announcements next week at gamescom. It's pretty clear now the future of gaming will include ray tracing. The number of software developers with creative tools that have adopted RTX is really quite spectacular. We now have 40 -- over 40 ISV tools, announced at SIGGRAPH, that have accelerated ray tracing and video editing. And some of the applications' amazing AI capabilities for image optimization and enhancement support RTX. And so looking forward, this is what I expect. I expect that ray tracing is going to drive a reinvigoration of gaming graphics. I expect that the over 100 laptops that we have RTX GPUs designed into are going to contribute to our growth. Notebook gaming is one of the fastest-growing segments of the gaming platform world. The percentage of notebooks that are able to game is only a few percent, so it's extremely underexposed. And yet, we know that gamers are -- like the rest of us, they like thin and light notebooks, but they like them to be able to run powerful games. And so this is an area that has grown significantly for us year-over-year, and we're expecting it to grow through the second half and through next year.
And one of the things that's really exciting is our RTX Studio line that we introduced recently. We observed, through our discussions with the PC industry, that creatives are really underexposed and underserved by the latest technologies. And they want notebooks and they want PCs that have powerful graphics. They use them for 3D content creation, high-definition video editing, image optimization and things like that. And we introduced a brand-new line of computers that we call RTX Studio. Now the OEMs were so excited about it. And at SIGGRAPH, we now have 27 different laptops shipping and more coming. And so I think RTX is really geared for growth. We have great games coming. We've got the SUPER line of GPUs. We have all of the notebooks that we're designed into that we're ramping and, of course, the new RTX Studio line. And so I expect this to be a growth market for us.

--------------------------------------------------------------------------------
Christopher James Muse,  Evercore ISI Institutional Equities, Research Division - Senior MD, Head of Global Semiconductor Research & Senior Equity Research Analyst    [6]
--------------------------------------------------------------------------------

          Very helpful. If I could follow up on the data center side, perhaps you can speak directly just to the hyperscale side, both internal and cloud, and whether you're seeing any green shoots, any signs of life there and how you're thinking about what that rate of recovery could look like over time.

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [7]
--------------------------------------------------------------------------------

          With the exception of a couple of hyperscalers, C.J., I would -- we're seeing broad-based growth in data centers. In the area of training, the thing that's really exciting everybody, and everybody is racing towards, is training these large gigantic natural language understanding models, language models. The transformer model that was introduced by Google, called BERT, has since been enhanced into XLNet and RoBERTa and, gosh, so many different versions, GPT-2, and Microsoft's MASS. And there's so many different versions of these language models. And in AI, NLU, natural language understanding, is one of the most important areas that everybody's racing to go to. And so these models are really, really large. They're over 1,000x larger than the image models that we were training just a few years ago, and they're just gigantic models. It's one of the reasons why we built the DGX SuperPOD so that we could train these gigantic models in a reasonable amount of time. The second area -- so that's training in the hyperscalers.
The second area where we're seeing enormous amounts of activity has to do with trying to put these conversational AI models into services so that they can be interactive and in real time. Photo tagging and photo enhancement are things that you can do off-line, while you have excess capacity outside the busiest time of the day. You can't do that with language and conversational AI. You have to respond to the person in real time. And so the performance that's required is significant. But more importantly, the number of models necessary for conversational AI, from speech recognition to language understanding to recommendation systems to text-to-speech to wave synthesis, these 5, 6, 7 models have to be processed in series and in real time so that you can have a reasonable conversation with the AI agent.
And so these types of activities are really driving interest and activity at all of the hyperscalers. My expectation is that this is going to continue to be a big growth opportunity for us. But more importantly, in addition to that, we're seeing that AI is -- the wave of AI is going from the cloud to the enterprise to the edge and all the way out to autonomous systems. The place where we're seeing a lot of excitement, and we talked about that in the past and we're seeing growth there, has to do with the vertical industry enterprises that are starting to adopt AI to create new products, whether it's a delivery robot or some kind of a chat bot or the ability to detect fraud in financial services. These applications in vertical industries are really spreading all over the place. There are over 4,000 AI start-ups around the world. And the way that we engage them is they use our platform to start developing AI in the cloud. And as you know, we're the only AI platform that's available on-prem and in every single cloud. And so they can use our AI platform in all the clouds, which is driving our external cloud computing growth. And then they can also use it on-prem if their usage really grows significantly. And that's one of the reasons why our Tesla for OEMs and DGX businesses are growing. And so we're seeing broad-based excitement around AI as they use it for their products and new services. And these 4,000, 4,500 start-ups around the world are really driving consumption of that.

--------------------------------------------------------------------------------
Operator    [8]
--------------------------------------------------------------------------------

          And your next question comes from the line of Vivek Arya with Bank of America Merrill Lynch.

--------------------------------------------------------------------------------
Vivek Arya,  BofA Merrill Lynch, Research Division - Director    [9]
--------------------------------------------------------------------------------

          I actually had 2 as well, one quick one for Colette and one for Jensen. Colette, good to see the gross margin recovery getting into October. Is this 62% to 63% range a more sustainable level and perhaps a level you could grow off of as sales get to more normalized levels? And then a bigger question is for Jensen. Again, on the data center side, Jensen, when I look back between -- 2015 to 2018, your data center business essentially grew 10x. And then the last year has been a tough one with the slowdown in cloud CapEx and so forth. When do you think your data center starts to grow back on a year-to-year -- on a year-on-year basis? Can that happen sometime -- later this year? And then just longer term, what is the right way to think about this business? Does it go back to prior levels? Does it grow at a different pace? This is the one part of the business that I think is toughest for us to model, so any color would be very helpful.

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [10]
--------------------------------------------------------------------------------

          Great. So let me start first with your question, Vivek, regarding gross margins. Yes, thanks for recognizing that we are moving towards our expectations that, over time, we'll continue to see our overall volumes improve. Essentially, our business is normalized. We've reached normalized levels through the last couple of quarters. And this quarter, very similar to what we will see going forward, mix is the largest driver of our overall gross margins and our gross margin improvements.

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [11]
--------------------------------------------------------------------------------

          Yes, Vivek, if you look at the last several years, there's no question our data center business has grown a lot. And my expectation is that it's going to grow a lot more, and let me explain to you why. Aside from a few uncontrollable circumstances and with the exception of a couple of large customers, the overall trend, the broad-based trend, of our data center business is up and to the right. And it is growing very nicely. There are a couple of different dynamics that, on first principles, are causing that to grow. And of course, one of them is that AI is now well known to require accelerated computing, and our computing architecture is really ideal for it. AI is not just one network. It's thousands of different types of networks, and these networks are getting more and more complex over time, and the amount of data you have to process is enormous. And so like all software programs, you can't predict exactly how the software is going to get programmed. And having a programmable architecture like CUDA, and yet one optimized for AI with the Tensor Cores that we've created, is really the ideal architecture.
We know also that AI is the most powerful technology force of our time. The ability for machines to learn and write software by themselves, and to write software that no humans can write, is pretty extraordinary. And the applications of AI, as you guys are seeing yourselves, are just spreading in every single industry. And so the way we think about AI is in waves, if you will. The first wave of AI is developing the computer architecture, and that was the first part where -- that's when a lot of people discovered who we are, and we emerged into the world of high-performance computing in AI. The second wave is applying AI for cloud service providers or hyperscalers. They have a large amount of data. They have a lot of consumer applications. Many of them are not life-critical and so, therefore, the application of an early-adoption technology was really viable. And so you saw hyperscalers adopt AI. And the thing that's really exciting for us is that beyond recommendations, beyond image enhancement, the area where we believe the most important application for AI lies is likely conversational AI. Most people are talking to their mobile devices, asking questions, looking for something or asking for directions. Instead of having a page of -- a list of options, it responds with an answer that is very likely a good one.
The next phase of AI is what we call vertical industry enterprise AI. And this is where companies are using it not just to accelerate their business processes internally, but also to create new products and services. They could be anything from new medical instruments to IoT-based medical instruments that monitor your health. It could be an application that's used in financial services for forecasting or for fraud detection. It could be some kind of device that delivers pizza to you, delivery bots. And with the combination of IoT and artificial intelligence, for the very first time, you actually have the software capabilities to make use of all of these sensors that you're putting all over the world. And that's the next phase of growth. And it affects companies from large industrials, transportation companies, retailers, you name it. Health care companies, you name it. And so that phase of growth of AI is the phase that we're about to enter into.
And then the longer term is an industry that we all know to be extremely large, but it takes time because it's life-critical, and it has to do with transportation. It's a $100 trillion industry. We know it's going to be automated. We know that everything that moves in the future will be autonomous or have autonomous capabilities. And that's just a matter of time before we realize its full potential.
And so the net of it all is that I believe that AI is the single most powerful technology force of our time, and that's why we're all in on it. And we know that acceleration and accelerated computing is the perfect model for that. And it started in the cloud, but it's going to keep moving out into the edge and through data centers and enterprises and hopefully -- well, eventually, all the way out into autonomous devices and machines in the real world. And so this is a big market, and I'm super enthusiastic about it.

--------------------------------------------------------------------------------
Operator    [12]
--------------------------------------------------------------------------------

          And your next question comes from the line of Toshiya Hari with Goldman Sachs.

--------------------------------------------------------------------------------
Toshiya Hari,  Goldman Sachs Group Inc., Research Division - MD    [13]
--------------------------------------------------------------------------------

          I had 2 as well, one for Jensen and the other for Colette. Jensen, you guys called out inference as a significant contributor to growth in data center last quarter. I think you guys talked about it being a double-digit percentage contributor, so I'm curious what you saw from inference in the quarter. And more importantly, if you can talk about the outlook, both near term and long term, as it relates to inference, that'll be helpful. And then secondly, for Colette, just want to double click on the gross margin question. The sequential improvement that you're guiding to is a pretty significant number. So I was just hoping you can kind of break it down for us in terms of overall volume growth and mix dynamics, both between segments and within segments, and also to the extent DRAM pricing is impacting that, any color on that will be helpful as well.

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [14]
--------------------------------------------------------------------------------

          Yes, Toshiya, I got to tell you, I'm less good at near-term predictions than I am at thinking about long-term dynamics. But let me talk to you about inference. Our inference business is -- remains robust. It's double digits. It's a large part of our business. And -- but more importantly, there are 2 dynamics that I think are near term and that are going to drive growth. Number one is interactive conversational AI, interactive conversational AI inference.
If you simply ask a chat bot a simple question, where is the closest pizza shop, and you would like to have a conversation with this bot, it would have to do speech recognition, it has to understand what it is that you asked about, it has to look it up in a recommender based on the location you're at, maybe your preferences of styles of pizza and the price ranges that you're interested in and how far you're willing to go to get it. It has to recommend a pizza shop for you to go to. It then has to translate that from text to speech into a human-understandable voice.
And those models have to happen in just a few -- ideally, a few hundred milliseconds. Currently, it's not that. And that makes it really hard for these services to be deployed broadly and used for all kinds of different applications. And so that's the near-term opportunity: interactive conversational AI inference. And you could just imagine every single hyperscaler racing to make this possible because recently, we had some important breakthroughs in machine learning language models. The BERT model that I mentioned earlier is a really, really important development, and it has caused a large number of derivatives that have improved upon it. And so, near term, conversational AI inference.
We're also seeing, near term, inference at the edge. There are many types of applications where, for laws of physics reasons, speed of light reasons, economic reasons or data sovereignty reasons, it's not possible to stream the data to the cloud and have the inference done in the cloud. You have to do that at the edge. You need the latency to be low, and the amount of data that you're streaming is continuous, so you don't want to be paying for that line rate the whole time, and maybe the data is of great confidentiality or privacy. And so we're seeing a lot of excitement and a lot of development for edge AI. Smart retail, smart warehouses, smart factories, smart cities, smart airports, you just make a list of those kinds of things, basically locations where there is a lot of activity, where safety or cost matters or large amounts of materials are passing through, you could just imagine the applications. All of those really want to be edge computing systems and edge inference systems. And so those are the 2 near-term drivers, and I think it's fair to say that both of them are quite large opportunities.

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [15]
--------------------------------------------------------------------------------

          So to answer your question regarding gross margin in a little bit more detail, probably the largest area where we expect improvement in terms of our mix is within our overall gaming business. We expect to have a full quarter of our SUPER lineup in the next quarter, including our RTX GPUs, as well as our notebooks becoming a bigger part of the mix as they grow. These drivers are among the largest reasons why we see that growth in our gross margin. We always think about our component costs and our overall cost of manufacturing, so this is always baked in over time, but we'll continue to see improvements on that as well.

--------------------------------------------------------------------------------
Operator    [16]
--------------------------------------------------------------------------------

          And your next question comes from the line of Harlan Sur with JPMorgan.

--------------------------------------------------------------------------------
Harlan Sur,  JP Morgan Chase & Co, Research Division - Senior Analyst    [17]
--------------------------------------------------------------------------------

          Again, your data center business, many of your peers on the compute and storage side are seeing spending recovery by cloud and hyperscalers in the second half of this year after a similarly weak first half of the year. You guys saw some growth in Q2 driven primarily by enterprise. It seems like you had some broadening out of the customer spending this quarter. Inferencing continues to see strong momentum. Would you guys expect that this translates into a double-digit percentage sequential growth in data center in Q3 off of the low base in Q2?

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [18]
--------------------------------------------------------------------------------

          Our hyperscale data center with a few customers don't give us very much -- we don't get very much visibility from a handful of customers in hyperscale. However, we're seeing broad-based growth and excitement in data centers. And the way to think about data center, our data center business consists of hyperscale training, internal training, hyperscale inference, cloud computing -- and that's hyperscale, and that cloud is a public cloud. And then we have vertical industry enterprise, what sometimes we call enterprise, vertical industry enterprise, it could be transportation companies, retailers, telcos, vertical industry adoption of AI either to accelerate their business or to develop new products and services.
And then the -- so when you look at our data center from that perspective and these pieces, although we don't see as much -- we don't get as much visibility as we'd like into a couple of the large customers, across the rest of the hyperscalers, we're seeing broad-based growth. And so we're experiencing the enthusiasm and the energy that maybe the others are seeing. And so we'll keep reporting -- updating you guys as we go. We'll see how it goes.

--------------------------------------------------------------------------------
Operator    [19]
--------------------------------------------------------------------------------

          And your next question comes from the line of Timothy Arcuri with UBS.

--------------------------------------------------------------------------------
Timothy Michael Arcuri,  UBS Investment Bank, Research Division - MD and Head of Semiconductors & Semiconductor Equipment    [20]
--------------------------------------------------------------------------------

          I had 2. I guess first for Jensen, Volta's been around now for about 2 years. Do you see signs of demand maybe building up ahead of the new set of nanometer products, whenever that comes out? I guess I'm just wondering whether there's some element of this that's more around product cadence and that gets resolved as you do roll out the product. That's the first question.
And then I guess, the second question, Colette, is of the $300 million growth into October, it sounds like Switch is pretty flat, but I'm wondering if you can give us maybe some qualitative sense of where the growth is coming from, is it maybe like 2/3 gaming and 1/3 data centers, something like that?

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [21]
--------------------------------------------------------------------------------

          Well, Volta -- data center products can't churn that fast. We -- gamers could churn products quickly because they're bought and sold one at a time. But data centers -- data center infrastructure really has to be planned properly, and the build-out takes time. And we expect Volta to be successful all the way through next year. And software still continues to be improved on it. We're still improving systems on it. And in fact, just 1 year -- in just 1 year, we improved our AI performance on Volta by almost 2x, 80%. And so you could just imagine the amount of software that's built on top of Volta and all the Tensor Cores and all the GPUs connected with NVLink and the large number of nodes that are connected to build supercomputers.
The software for building these systems, large-scale systems, is really, really hard. And that's one of the reasons why you hear people talk about chips, but they never show up, because building the software is just an enormous undertaking. The number of software engineers we have in the company is in the thousands, and we have the benefit of having built on top of this architecture for over 1.5 decades. And so when we're able to deploy into data centers as quickly as we do, I think we kind of lose sight of how hard it is to do that in the first place. The last time a new processor entered the data center was the x86 Xeon, and you just don't bring processors into data centers that frequently or that easily. And so I think the way to think about Volta is that it's surely in its prime, and it's going to keep -- continue to do well all the way through next year.

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [22]
--------------------------------------------------------------------------------

          In regard to our guidance on revenue, we do guide in terms of the total. You have seen, in this last quarter, we executed a sequential increase, really focusing on moving to a normalization of our gaming business. And we're now approaching the second half of the year, getting ready for back to school and the holidays. So you should also expect our gaming business to continue to grow to reach that full normalization by the end of Q3. We do expect the rest of our platforms to likely also grow. We have a couple of different models on how that will come out. But yes, we do expect our data center business to grow, and then we'll see on the rest of our businesses as well.

--------------------------------------------------------------------------------
Operator    [23]
--------------------------------------------------------------------------------

          Your next question comes from the line of Matt Ramsay with Cowen.

--------------------------------------------------------------------------------
Matthew D. Ramsay,  Cowen and Company, LLC, Research Division - MD & Senior Technology Analyst    [24]
--------------------------------------------------------------------------------

          A couple of questions. I guess the first one is, Jensen, if you have any, I guess, high-level qualitative commentary on how the new SUPER upgrades of your Turing platform have been received in the market and how you might think about them progressing through the year. And then, I guess, the second question is a bigger one. Intel's talked quite openly about One API. The software stack at Xilinx is progressing with Versal ACAP. I mean you guys get a lot of credit for the decade of work that you've done on CUDA. But I wonder if you might comment on if you've seen any movement in the competitive landscape on the software side for the data center space.

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [25]
--------------------------------------------------------------------------------

          SUPER is off to a great start. Goodness, SUPER is off to a super start. And if you look at -- if you do channel checks all over, even though we've got a lot of products in the channel and we -- last quarter was a transitional quarter for us actually. And we didn't -- we shipped SUPER later in the quarter. But because the entire ecosystem and all of our execution engines are so primed, we were able to ship a fair number through the channel. And so -- and yet, if you do spot checks all around the world, they're sold out almost everywhere. And the pricing in the spot market is drifting higher than MSRP. That just tells you something about demand. And so that's really exciting. SUPER is off to a super start. And at this point, it's a foregone conclusion that if you're going to buy a new graphics card, and it's going to last you 2, 3, 4 years, to not have ray tracing is just crazy. Ray tracing content just keeps coming out. And between the performance of SUPER and the fact that it has ray tracing hardware, it's going to be super well positioned throughout all of next year.
Your question about APIs and software programmability. APIs are just one of the issues. The larger issue with processors is how you program them. The reason why x86s and CPUs are so popular is because they solve the great challenge of software developers: how to program a computer. And how to program a computer and how to compile for that computer is a paramount concern of computer science, and it's an area of tremendous research. Going from a single CPU to multi-core CPUs was a great challenge. Going from multi-core CPUs to multi-node multi-core CPUs is an enormous challenge. And yet, when we created CUDA on our GPUs, we went from 1 CPU core, or one processor core, to a few to now, in the case of large-scale systems, millions of processor cores. And how do you program such a computer across multi-GPU, multi-node? It's a concept that's not easy to grasp. And so I don't really know how one programming approach or a simple API is going to make 7 different types of weird things work together. And I can't make it fit in my head. But programming isn't as simple as a PowerPoint slide, I guess. And I think it's just -- time will tell whether one programming approach could fit 7 different types of processors when at no time in history has it ever happened.

--------------------------------------------------------------------------------
Operator    [26]
--------------------------------------------------------------------------------

          Your next question comes from the line of Joe Moore with Morgan Stanley.

--------------------------------------------------------------------------------
Joseph Lawrence Moore,  Morgan Stanley, Research Division - Executive Director    [27]
--------------------------------------------------------------------------------

          I wonder if you could talk about the strength in the automotive business. Looks like the services piece of that is getting to be bigger. What's the outlook for that part of the business? And can you give us a sense of the mix between services and components at this point?

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [28]
--------------------------------------------------------------------------------

          Sure. Thanks, Joe. Our approach to autonomous vehicles comes in basically 2 parts. The first part is a full stack, which is building the architected processor, the system, the system software and all of the driving applications on top, including the deep neural nets. We call that a full stack self-driving car computer. The second part of DRIVE includes an end-to-end AV development system. For those who would like to use our processors and our system software but create their own applications, we created a system that basically shares with them the computing infrastructure that we built for ourselves and allows them to do end-to-end development, from deep learning development to the AV application to simulating that application to doing regression testing of that application before they deploy it into a car. And the 2 systems that we use there are called DGX for training and Constellation for simulation and what is called Replay.
And then the third part of our business model is development agreements, otherwise known as NRE. These 3 elements, the full stack computer, the end-to-end development flow and NRE product development, make up the overall DRIVE business. And so although the cars will take several years to go into production, we're seeing a lot of interest in working with us to develop self-driving cars using our development systems and entering into development projects. And so we're -- the number of autonomous vehicle projects around the world is quite large, as you can imagine. And so my sense is that we're going to continue to do well here. The additional part of autonomous vehicles, where the capability has been derived and is going to open up more near-term opportunities, has to do with things like delivery shuttles, self-driving shuttles and maybe cargo movers inside walled warehouses. Those kinds of autonomous machines require basically the same technology, but they're sooner and easier to deploy. And so we are seeing a lot of excitement around that area.

--------------------------------------------------------------------------------
Operator    [29]
--------------------------------------------------------------------------------

          Your next question comes from the line of Aaron Rakers with Wells Fargo.

--------------------------------------------------------------------------------
Aaron Christopher Rakers,  Wells Fargo Securities, LLC, Research Division - MD of IT Hardware & Networking Equipment and Senior Analyst    [30]
--------------------------------------------------------------------------------

          Congratulations on the improved performance. At your Analyst Day back a couple of months ago, you had highlighted the installed base opportunity for RTX. And I think at that point in time, you talked about 50% being Pascal-based, 48% being pre-Pascal. You also alluded to the fact that you were seeing a positive mix shift higher in terms of the price points of this RTX cycle. So I'm curious, where do we stand on the current product cycle? And what are you seeing currently as we go through this product cycle on the Turing platforms?

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [31]
--------------------------------------------------------------------------------

          We launched -- well, first of all, the answer is that RTX adoption is faster than Pascal's adoption if you normalize to time 0 of launch. The reason for that is Pascal launched top to bottom on the same day. And as you guys know, we weren't able to do that for Turing. But if we did that for Turing, the adoption rate is actually faster. And to me, it's rather sensible. And the reason for that is because Pascal was basically DX12. And Maxwell was DX12. And Turing is the world's first DXR GPU, the first ray tracing GPU, with brand-new functionality, a brand-new API and a lot more performance. And so I think it's sensible that Turing's adoption is going to be rapid.
The second element of Turing is something that we've never talked about before. We're mentioning it more and more because it's such an exciting market for us, and that is notebooks. The installed base of Pascal has very, very few notebooks in it. And the reason for that is because, in the past, we were never able to put a high-performance gaming GPU into a thin and light notebook until we invented Max-Q. And in combination with our energy efficiency, we were able to -- we're now able to put a 2080 into a laptop, and it's still beautiful. And so this is effectively a brand-new growth market for us. And with so few people and so few gamers in the world that are able to game on a laptop, I think this is going to be a nice growth market for us.
And then the new market that we introduced and launched this last quarter is called RTX Studio. And this is an underserved segment of the market where consumers, enthusiasts, they could be artists working at small firms, they need powerful computers to do their work. They need powerful computers to do rendering and high-definition video editing. And yet they're underserved by workstations because workstations are really sold on a B2B basis into large enterprises. And so we aligned all of the OEMs and created a whole new line of notebooks called RTX Studio. And the enthusiasm has been great. We've launched 27 different laptops, and I'm looking forward to seeing the results of that. This is tens of millions of people who are creators. Some of them are professionals, some of them are hobbyists. And they use Adobe suites, they use Autodesk suites, some of them use SolidWorks and some of them use all kinds of renderers, like Blender. And these are 3D artists and video artists, and this digital content creation is the modern way of creativity. And so this is an underserved market that we're excited to go serve with RTX Studio.

--------------------------------------------------------------------------------
Operator    [32]
--------------------------------------------------------------------------------

          And your last question comes from the line of Stacy Rasgon with Bernstein Research.

--------------------------------------------------------------------------------
Stacy Aaron Rasgon,  Sanford C. Bernstein & Co., LLC., Research Division - Senior Analyst    [33]
--------------------------------------------------------------------------------

          I have 2 for Colette. My first question is on data center. So I know you say that you have a broad-based growth except for a few hyperscalers. But you only grew at 3% sequentially, about $20 million. That doesn't sound like broad-based growth to me unless like -- did the hyperscalers get worse? Or are they just still so much bigger than like the rest of it? I guess, what's going on in data center? How do I wrap my head around like broad-based growth with relatively minimal growth observed?

--------------------------------------------------------------------------------
Colette M. Kress,  NVIDIA Corporation - Executive VP & CFO    [34]
--------------------------------------------------------------------------------

          So to answer your question here, Stacy, what we refer to when we're discussing the broad-based growth is the substantial expansion that we have in the types of customers and the industries that we are now approaching. As you know, even a year ago, we had a very, very small base in terms of industry-based hyper -- excuse me, industry-based AI workloads that they were using. Over this last quarter, we're continuing to see strong growth as we roll out all different types of AI solutions, both across the U.S. and worldwide, to these overall customers. Our hyperscalers, again, a couple of them are not necessarily growing. Some of them are flat and some of them are growing, depending on whether that's for cloud instances or whether they're using it for internal use. So we believe that our continued growth with the industries is important for us for the long term to expand the use of AI, and we're just really pleased with what we're seeing in that growth this quarter.

--------------------------------------------------------------------------------
Operator    [35]
--------------------------------------------------------------------------------

          I'll now turn the call back over to Jensen for any closing remarks.

--------------------------------------------------------------------------------
Jen-Hsun Huang,  NVIDIA Corporation - Co-Founder, CEO, President & Director    [36]
--------------------------------------------------------------------------------

          Thanks, everyone. We're happy with our results this quarter and our return to growth across our platforms. Gaming is doing great. It's great to see NVIDIA RTX reinvigorating the industry. GeForce has several growth drivers. Ray traced games continue to gain momentum. A large number of gaming laptops are rolling out, and our new Studio platform is reaching the large underserved community of creators. Outside a few hyperscalers, we're seeing broad-based growth in data centers. AI is the most powerful technology force of our time and a once-in-a-lifetime opportunity. More and more enterprises are using AI to create new products and services while leveraging AI to drive ultra-efficiency and speed in their business. And with hyperscalers racing to harness recent breakthroughs in conversational AI, we see growing engagements in training as well as interactive conversational inference. RTX, CUDA accelerated computing, AI, autonomous vehicles, the work we're doing is important, impactful and incredibly fun. We're just grateful there is so much of it. We look forward to updating you on our progress next quarter.

--------------------------------------------------------------------------------
Operator    [37]
--------------------------------------------------------------------------------

          This concludes today's conference call. You may now disconnect.







--------------------------------------------------------------------------------
Definitions
--------------------------------------------------------------------------------
PRELIMINARY TRANSCRIPT: "Preliminary Transcript" indicates that the 
Transcript has been published in near real-time by an experienced 
professional transcriber.  While the Preliminary Transcript is highly 
accurate, it has not been edited to ensure the entire transcription 
represents a verbatim report of the call.

EDITED TRANSCRIPT: "Edited Transcript" indicates that a team of professional 
editors have listened to the event a second time to confirm that the 
content of the call has been transcribed accurately and in full.

--------------------------------------------------------------------------------
Disclaimer
--------------------------------------------------------------------------------
Thomson Reuters reserves the right to make changes to documents, content, or other 
information on this web site without obligation to notify any person of 
such changes.

In the conference calls upon which Event Transcripts are based, companies 
may make projections or other forward-looking statements regarding a variety 
of items. Such forward-looking statements are based upon current 
expectations and involve risks and uncertainties. Actual results may differ 
materially from those stated in any forward-looking statement based on a 
number of important factors and risks, which are more specifically 
identified in the companies' most recent SEC filings. Although the companies 
may indicate and believe that the assumptions underlying the forward-looking 
statements are reasonable, any of the assumptions could prove inaccurate or 
incorrect and, therefore, there can be no assurance that the results 
contemplated in the forward-looking statements will be realized.

THE INFORMATION CONTAINED IN EVENT TRANSCRIPTS IS A TEXTUAL REPRESENTATION
OF THE APPLICABLE COMPANY'S CONFERENCE CALL AND WHILE EFFORTS ARE MADE TO
PROVIDE AN ACCURATE TRANSCRIPTION, THERE MAY BE MATERIAL ERRORS, OMISSIONS,
OR INACCURACIES IN THE REPORTING OF THE SUBSTANCE OF THE CONFERENCE CALLS.
IN NO WAY DOES THOMSON REUTERS OR THE APPLICABLE COMPANY ASSUME ANY RESPONSIBILITY FOR ANY INVESTMENT OR OTHER
DECISIONS MADE BASED UPON THE INFORMATION PROVIDED ON THIS WEB SITE OR IN
ANY EVENT TRANSCRIPT. USERS ARE ADVISED TO REVIEW THE APPLICABLE COMPANY'S
CONFERENCE CALL ITSELF AND THE APPLICABLE COMPANY'S SEC FILINGS BEFORE
MAKING ANY INVESTMENT OR OTHER DECISIONS.
--------------------------------------------------------------------------------
Copyright 2019 Thomson Reuters. All Rights Reserved.
--------------------------------------------------------------------------------