Jensen, this is such an honor. Thank you for being here.

I'm delighted to be here. Thank you.

In honor of your return to Stanford, I decided we'd start by talking about the time when you first left. You joined LSI Logic, one of the most exciting companies at the time. You were building a phenomenal reputation with some of the biggest names in tech, and yet you decided to leave to become a founder. What motivated you?

Chris and Curtis. I was an engineer at LSI Logic, and Chris and Curtis were at Sun. I was working with some of the brightest minds in computer science, of the time and of all time, including Andy Bechtolsheim and others, building workstations and graphics workstations and so on. One day Chris and Curtis said they would like to leave Sun, and they would like me to figure out with them what they were going to leave for. I had a great job, but they insisted that I figure out with them how to build a company. So we hung out at Denny's whenever they dropped by, which is, by the way, my alma mater; my first job before CEO was as a dishwasher, and I did that very well. Anyway, we got together and we decided. It was during the microprocessor revolution; this was 1993, and 1992 when we were getting together. The PC revolution was just getting going: Windows 95, obviously the revolutionary version of Windows, hadn't even come to market yet, and the Pentium hadn't even been announced. This was right before the PC revolution, and it was pretty clear that the microprocessor was going to be very important. We thought, why don't we build a company to go solve problems that a normal computer, powered by general-purpose computing, can't? That became the company's mission: to build the type of computers, and solve the type of problems, that normal computers can't. To this day we're focused on that.
If you look at all the problems and the markets that we opened up as a result, it's things like computational drug design, weather simulation, materials design, things we're really proud of, robotics, self-driving cars, autonomous software, what we call artificial intelligence. And then, of course, we drove the technology so hard that eventually the computational cost went to approximately zero, and that enabled a whole new way of developing software, where the computer writes the software itself: artificial intelligence as we know it today. So that was it; that was the journey. Thank you all for coming.

Well, these applications are on all of our minds today, but back then the CEO of LSI Logic convinced his biggest investor, Don Valentine, to meet with you. He is obviously the founder of Sequoia. Now I can see a lot of founders here edging forward in anticipation, but how did you convince the most sought-after investor in Silicon Valley to invest in a team of first-time founders building a new product for a market that doesn't even exist?

I didn't know how to write a business plan, so I went to a bookstore, back then there were bookstores, and in the business book section there was this book, written by somebody I knew, Gordon Bell. I should go find it again; it's a very large book, and the title says how to write a business plan. That was a highly specific title for a very niche market; it seems like he wrote it for about 14 people, and I was one of them. So I bought the book. I should have known right away that it was a bad idea, because Gordon is super smart, and super smart people have a lot to say, and I'm pretty sure Gordon wanted to teach me how to write a business plan completely. The book is something like 450 pages long.
Well, I never got through it, not even close. I flipped through a few pages and thought, you know what, by the time I'm done reading this thing I'll be out of business, I'll be out of money. Lori and I only had about six months in the bank, and we already had Spencer, Madison, and a dog, so the five of us had to live off whatever money we had in the bank. I didn't have much time. So instead of writing the business plan, I just went to talk to Wilf Corrigan. He called me one day and said, hey, you left the company and didn't even tell me what you were doing; I want you to come back and explain it to me. So I went back and explained it to Wilf, and at the end of it he said, I have no idea what you said; that's one of the worst elevator pitches I've ever heard. Then he picked up the phone and called Don Valentine, and he said, Don, I'm going to send a kid over, and I want you to give him money. He's one of the best employees LSI Logic ever had. So the thing I learned is: you can make up a great interview, you can even have a bad interview, but you can't run away from your past.
So have a good past; try to have a good past. In a lot of ways I was serious when I said I was a good dishwasher. I was probably Denny's best dishwasher: I planned my work, I was organized, I was mise en place, and I washed the living daylights out of those dishes. Then they promoted me to busboy, and I was certain I was the best busboy Denny's ever had. I never left a station empty-handed, I never came back empty-handed, I was very efficient. Anyway, eventually I became a CEO, and I'm still working on being a good CEO.

You talk about being the best. You needed to be the best among 89 other companies that were funded after you to build the same thing, and then, with six to nine months of runway left, you realized that the initial vision was just not going to work. How did you decide what to do next to save the company when the cards were so stacked against you?

Well, we started this company for accelerated computing, and the question was: what is it for? What's the killer app? That was our first great decision, and it's what Sequoia funded. The first great decision was that the first killer app was going to be 3D graphics: the technology was going to be 3D graphics and the application was going to be video games. At the time, 3D graphics was impossible to make cheap; it was million-dollar image generators from Silicon Graphics. So it was a million dollars and hard to make cheap, and the video game market was $0 billion. You have this incredible technology that's hard to commoditize and commercialize, and you have this market that doesn't exist. That intersection was the founding of our company.
I still remember, at the end of my presentation, Don said something to me that made a lot of sense back then and makes a lot of sense today. He said: startups don't invest in startups; startups don't partner with startups. His point was that in order for NVIDIA to succeed, we needed another startup to succeed, and that other startup was Electronic Arts. Then on the way out he reminded me that Electronic Arts's CTO was 14 years old and had to be driven to work by his mom; he just wanted to remind me that that's who I was relying on. And after that he said: if you lose my money, I'll kill you. Those were my memories of that first meeting.

Nonetheless, we created something. We went on over the next several years to create the market, to create the gaming market for PCs, and it took a long time; we're still doing it today. We realized that not only do you have to create the technology and invent a new way of doing computer graphics, so that what was a million dollars is now $300, $400, $500 and fits in a computer, you also have to go create this new market. So we had to create technology and create markets. The idea that a company would create technology and create markets defines NVIDIA today. Almost everything we do, we create the technology and we create the market; that's why people say we have a stack, an ecosystem, words like that, but that's basically it at the core. For 30 years, what NVIDIA realized we had to do, in order to create the conditions by which somebody could buy our products, was to go invent the new market. It's the reason we were early in autonomous driving, early in deep learning, early in just about all of these things, including computational drug design and discovery. In all these different areas, we're trying to create the market while we're creating the technology.
Then we got going, and Microsoft introduced a standard called Direct3D, and that spawned hundreds of companies. A couple of years later we found ourselves competing with just about everybody, and the thing we had invented, the technology we had invented to consumerize 3D graphics, turned out to be incompatible with Direct3D. So here we were: we had started this company, we had this 3D graphics thing, this million-dollar thing we were trying to make consumer-priced, we had invented all this technology, and shortly afterward it became incompatible. We had to reset the company or go out of business, but we didn't know how to build it the way Microsoft had defined it. I remember a meeting on a weekend, and the conversation was: we now have 89 competitors; I understand that the way we do it is not right, but we don't know how to do it the right way. Thankfully there was another bookstore, called Fry's Electronics. I don't know if it's still here. I think I drove Madison, my daughter, to Fry's on a weekend, and it was sitting right there: the OpenGL manual, which defined how Silicon Graphics did computer graphics. It was right there, about $68 a book, and I had a couple hundred dollars, so I bought three copies.
I took them back to the office and said, guys, I found it, I found our future, and I handed out the three copies. It had a big, nice centerfold, and the centerfold was the OpenGL pipeline, the computer graphics pipeline. I handed it to the same geniuses I had founded the company with, and we implemented the OpenGL pipeline like nobody had ever implemented it, and we built something the world had never seen. A lot of lessons are right there. That moment in time gave our company so much confidence, and the reason is this: you can succeed in doing something, in inventing a future, even if you were not informed about it at all. That is kind of my attitude about everything now. When somebody tells me about something I've never heard of, or something I've heard of but don't understand how it works at all, my first thought is always: how hard can it be? It's probably just a textbook away; you're probably one arXiv paper away from figuring it out. So I spend a lot of time reading arXiv papers, and it's true. Now, of course, you can't learn how somebody else does something, do it exactly the same way, and hope for a different outcome. But you can learn how something can be done and then go back to first principles and ask yourself: given the conditions today, given my motivation, given the instruments and the tools, given how things have changed, how would I redo this? How would I reinvent this whole thing? How would I build a car today? Would I build it incrementally from the 1950s and 1900s? How would I build a computer today? How would I write software today? Does that make sense? I go back to first principles all the time, even in the company today, and just reset ourselves, because the world has changed. The way we wrote software in the past was monolithic and designed for supercomputers; now it's disaggregated, and so on. How we think about software today, how we think about computers today: always cause your company, and cause yourself, to go back to first principles, and it creates lots and lots of opportunities.
The way you applied this technology turned out to be revolutionary. You got all the momentum you needed to IPO, and then some, because you grew your revenue nine times in the next four years. But in the middle of all this success, you decided to pivot, a little bit, the focus of innovation happening at NVIDIA, based on a phone call with a chemistry professor. Can you tell us about that phone call and how you connected the dots from what you heard to where you went?

Remember, at the core, the company was pioneering a new way of doing computing. Computer graphics was the first application, but we always knew there would be other applications. So image processing came, particle physics came, fluids came, and so on, all kinds of interesting things we wanted to do. We made the processor more programmable so that we could express more algorithms, if you will. Then one day we invented programmable shaders, which made all forms of imaging and computer graphics programmable; that was a great breakthrough. On top of that, we looked for ways to express more sophisticated algorithms that could be computed on our processor, which is very different from a CPU. So we created this thing called Cg, I think it was 2003 or so, C for GPUs. It predated CUDA by about three years. The same person who wrote the textbook that saved the company, Mark Kilgard, wrote that textbook.
Cg was super cool. We wrote textbooks about it, we started teaching people how to use it, and we developed tools and such. Then several researchers discovered it. Many researchers, students here at Stanford, were using it; many of the engineers who later became engineers at NVIDIA were playing with it. A couple of doctors at Mass General picked it up and used it for CT reconstruction, so I flew out to see them and asked what they were doing with this thing, and they told me. Then a computational quantum chemist used it to express his algorithms. So I realized there was some evidence that people might want to use this, and it gave us incrementally more confidence that we ought to go do this, that this form of computing could solve problems that normal computers really can't. It reinforced our belief and kept us going.

Every time you heard something new, you really savored that surprise, and that seems to be a theme throughout your leadership at NVIDIA. It feels like you make these bets so far in advance of technology inflections that when the apple finally falls from the tree, you're standing right there in your black leather jacket waiting to catch it. How do you find the conviction? It always seems like a diving catch.

Oh, it does seem like a diving catch. You do things based on core beliefs. We deeply believed that we could create a computer that solves problems normal processors can't, that there are limits to what a CPU can do, limits to what general-purpose computing can do, and that there are interesting problems we can go solve. The question is always: are those interesting problems only, or can they also be interesting markets? Because if they're not interesting markets, it's not sustainable.
NVIDIA went through about a decade where we were investing in this future and the markets didn't exist. There was only one market at the time, computer graphics; for 10 or 15 years, the markets that fuel NVIDIA today just didn't exist. So how do you continue, with all of the people around you, the company, NVIDIA's management team, all of the amazing engineers creating this future with me, all of your shareholders, your board of directors, all of your partners? You're taking everybody with you, and there's no evidence of a market. That is really, really challenging. The fact that the technology can solve problems, and the fact that there are research papers made possible because of it, is interesting, but you're always looking for that market.

But nonetheless, before a market exists, you still need early indicators of future success.

You know, there's a phrase called key performance indicators. Unfortunately, I find KPIs hard to understand. What's a good KPI? A lot of people, when they look for KPIs, go to gross margins. That's not a KPI; that's a result. You're looking for something that's an early indicator of future positive results, as early as possible, because you want that early sign that you're going in the right direction. So we have this phrase, EIOFS: early indicators of future success. It helps people, and I was using it all the time, to give the company hope: look, we solved this problem, we solved that problem. The markets didn't exist, but these were important problems, and that's what the company is about, solving these problems. We want to be sustainable, so the markets have to exist at some point, but you want to decouple the result from the evidence that you're doing the right thing. That's how you solve the problem of investing in something that's very, very far away and having the conviction to stay on the road: find, as early as possible, the indicators that you're doing the right things. Start with a core belief, and unless something changes your mind, you continue to believe in it, and look for early indicators of future success.
What are some of those early indicators that have been used by product teams at NVIDIA?

All kinds. I saw a paper, well, long before I saw the paper, I met some people who needed my help with this thing called deep learning; at the time I didn't even know what deep learning was. They needed us to create a domain-specific language so that all of their algorithms could be expressed easily on our processors. We created this thing called cuDNN. It's essentially the SQL of deep learning: SQL is that layer for storage computing, and this is that layer for neural-network computing. We created a domain-specific language for it, kind of like the OpenGL of deep learning. They needed us to do that so they could express their mathematics; they didn't understand CUDA, but they understood their deep learning, so we created this thing in the middle for them.
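To make the "thing in the middle" concrete, here is a minimal sketch of what expressing a deep-learning operation through the present-day cuDNN C API can look like, rather than hand-writing CUDA kernels. The tensor shape and the choice of a ReLU activation are illustrative assumptions, not details from the talk, and error handling is omitted:

/* Minimal sketch: describe the data and the operation, and let cuDNN
 * choose and launch the GPU kernels. Illustrative only. */
#include <cudnn.h>
#include <cuda_runtime.h>

int main(void) {
    cudnnHandle_t handle;
    cudnnCreate(&handle);

    /* Describe a batch of activations: 1 sample, 64 channels, 32x32. */
    cudnnTensorDescriptor_t desc;
    cudnnCreateTensorDescriptor(&desc);
    cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               1, 64, 32, 32);

    /* Describe the operation itself: a ReLU activation. */
    cudnnActivationDescriptor_t relu;
    cudnnCreateActivationDescriptor(&relu);
    cudnnSetActivationDescriptor(relu, CUDNN_ACTIVATION_RELU,
                                 CUDNN_NOT_PROPAGATE_NAN, 0.0);

    /* Device buffers for input and output. */
    size_t bytes = 1UL * 64 * 32 * 32 * sizeof(float);
    float *x, *y;
    cudaMalloc((void **)&x, bytes);
    cudaMalloc((void **)&y, bytes);
    cudaMemset(x, 0, bytes);

    /* One call expresses the math; the library picks the kernels. */
    float alpha = 1.0f, beta = 0.0f;
    cudnnActivationForward(handle, relu, &alpha, desc, x, &beta, desc, y);

    cudaFree(x);
    cudaFree(y);
    cudnnDestroyActivationDescriptor(relu);
    cudnnDestroyTensorDescriptor(desc);
    cudnnDestroy(handle);
    return 0;
}

The researcher states what to compute; the CUDA-level details of how it runs on the GPU stay inside the library, which is the "SQL of neural-network computing" analogy above.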
The reason we did it, even though these researchers had no money, is one of the great skills of our company: you're willing to do something even though the financial returns are completely non-existent, or maybe very, very far out even if you believe in it. We ask ourselves: is this worthy work to do? Does it advance a field of science somewhere that matters? Notice this is something I've been talking about since the very beginning: we find inspiration not from the size of a market but from the importance of the work, because the importance of the work is the early indicator of a future market. Nobody has to write a business case, nobody has to show me a P&L, nobody has to show me a financial forecast. The only questions are: is this important work, and if we didn't do it, would it happen without us? Now, if we didn't do something and it could happen without us, that actually gives me tremendous joy. Could you imagine: the world got better and you didn't have to lift a finger. That's the definition of ultimate laziness, and in a lot of ways you want that habit, because you want the company to be lazy about doing things other people can do. If somebody else can do it, let them do it. We should select the things that, if we didn't do them, the world would fall apart. You have to convince yourself of that: if I don't do this, it won't get done. And if that work is hard, impactful, and important, it gives you a sense of purpose. Does that make sense? So our company has been selecting these projects, and deep learning was just one of them. The first indicator of its success was that fuzzy cat that Andrew Ng came up with, and then Alex Krizhevsky detected cats, not all the time, but successfully enough that it seemed this might take us somewhere. Then we reasoned about the structure of deep learning, and we're computer scientists, we understand how things work, and we convinced ourselves this could change everything. Anyhow, that's an example.
These selections you've made have paid huge dividends, both literally and figuratively, but you've had to steer the company through some very challenging times, like when it lost 80% of its market cap amid the financial crisis, because Wall Street didn't believe in your bet on ML. In times like these, how do you steer the company and keep the employees motivated on the task at hand?

My reaction during that time was the same reaction I had to this week. Earlier today you asked me about this week; my pulse was exactly the same. This week is no different from last week or the week before. And the opposite of that, when you drop 80%: don't get me wrong, when your share price drops 80%, it's a little embarrassing. You just want to wear a t-shirt that says "it wasn't my fault." Even more than that, you don't want to get out of bed, you don't want to leave the house. All of that is true. But then you go back to just doing your job. I woke up at the same time, I prioritized my day in the same way. I went back to what I believe. You've got to gut-check yourself, always, back to the core: what do you believe? What are the most important things? Just check them off. Sometimes it's helpful: family loves me, okay, check. And so on. You check it off, you go back to your core, and then you go back to work, and in every conversation you go back to the core, keep the company focused on the core. Do you believe in it? Did something change? The stock price changed, but did something else change? Did the physics change? Did gravity change? Did any of the things that we assumed, that we believed, that led to our decisions, change? Because if those things changed, you have to change everything. But if none of those things changed, you change nothing; you keep on going.
Yeah, that's how you do it.

In speaking with your employees, they say that you try to avoid the public...

Including the employees? I'm just kidding. No, leaders have to be seen, unfortunately; that's the hard part. I was an electrical engineering student, and I was quite young when I went to school; when I went to college I was still 16 years old. I was young when I did everything, so I was a bit of an introvert. I'm shy, I don't enjoy public speaking, and I'm delighted to be here, I'm not suggesting otherwise, but it's not something I do naturally. And when things are challenging, it's not easy to be in front of precisely the people you care most about. Could you imagine a company meeting: our stock price has just dropped by 80%, and the most important thing I have to do as the CEO is to come and face you and explain it, and partly you're not sure why, partly you're not sure how long or how bad; you just don't know these things. But you still have to explain it, face all these people, and you know what they're thinking. Some of them are probably thinking we're doomed, some are probably thinking you're an idiot, and some are probably thinking something else. There are a lot of things people are thinking, and you know they're thinking them, but you still have to get in front of them and do the hard work.
They may be thinking those things, and yet not a single person on your leadership team left during times like this, and in fact...

Unemployable, that's what I keep reminding them. I'm just kidding. I'm surrounded by geniuses. NVIDIA is well known to have singularly the best management team on the planet; this is the deepest technology management team the world has ever seen, and I'm surrounded by a whole bunch of them: genius business teams, marketing teams, sales teams, engineering teams, research teams. Unbelievable.

Your employees say that your leadership style is very engaged. You have 50 direct reports, you encourage people across all parts of the organization to send you the top five things on their mind, and you constantly remind people that no task is beneath you. Can you tell us why you've purposefully designed such a flat organization, and how should we be thinking about the organizations we design in the future?

No task is beneath me, because, remember, I used to be a dishwasher, and I mean that. I used to clean toilets; I've cleaned more toilets than all of you combined, and some of them you just can't unsee. I don't know what to tell you; that's life. So you can't show me a task that's beneath me.
Now, I'm not doing it, but not because it's beneath me or not beneath me. If you send me something and you want my input on it, and I can be of service to you, and in my review of it I share with you how I reasoned through it, I've made a contribution to you: I've made it possible for you to see how I reason through something. Seeing how someone reasons through something empowers you. You go, oh my gosh, that's how you reason through something like this; it's not as complicated as it seems. This is how you reason through something that's super ambiguous, this is how you reason through something that's incalculable, this is how you reason through something that seems very scary. Do you understand? So I show people how to reason through things all the time: strategy, how to forecast something, how to break a problem down. You're empowering people all over the place. That's how I see it. If you send me something and want me to help review it, I'll do my best, and I'll show you how I would do it. In the process, of course, I learn a lot from you. You've given me a lot of information, and I learn a lot, so I feel rewarded by the process. It does take a lot of energy sometimes, because in order to add value to somebody who is incredibly smart to begin with, and I'm surrounded by incredibly smart people, you have to at least get to their plane. You have to get into their headspace, and that's really hard; it takes an enormous amount of emotional and intellectual energy, and I feel exhausted after I work on things like that. I'm surrounded by a lot of great people. A CEO should have the most direct reports, by definition, because the people who report to the CEO require the least amount of management. It makes no sense to me that CEOs have so few people reporting to them, except for one fact that I know to be true: the knowledge, the information of a CEO is supposedly so valuable and so secretive that you can only share it with two or three other people, and their information is so invaluable and so incredibly secretive that they can only share it with a couple more.
Well, I don't believe in a culture, an environment, where the information you possess is the reason you have power. I would like us all to contribute to the company, and our position in the company should have something to do with our ability to reason through complicated things, to lead other people to achieve greatness, to inspire and empower other people, to support other people. Those are the reasons the management team exists: in service of all the other people who work at the company, to create the conditions by which all of these amazing people, who volunteer to come work for you instead of all the other amazing high-tech companies around the world, can do their life's work. That is the mission. You've probably heard me say it pretty clearly: I believe my job is very simply to create the conditions by which you can do your life's work. So how do I do that? What do those conditions look like? They should result in a great deal of empowerment, and you can only be empowered if you understand the circumstances, right? You have to understand the context of the situation you're in in order to come up with great ideas. So I have to create a circumstance in which you understand the context, which means you have to be informed, and the best way to be informed is for there to be as few layers of information mutilation between us as possible. That's why it's very often that I reason through things in front of an audience like this: first of all, here are the facts we're starting from, here is the data we have, this is how I would reason through it, these are some of the assumptions, these are some of the unknowns, these are some of the knowns. You reason through it, and now you've created an organization that's highly empowered.
NVIDIA is 30,000 people. We're the smallest large company in the world, a tiny little company, but every employee is so empowered, and they're making smart decisions on my behalf every single day. The reason is that they understand my condition. I'm very transparent with people, and I believe I can trust you with the information. Oftentimes the information is hard to hear and the situations are complicated, but I trust that you can handle it. A lot of people hear me say, you're adults here, you can handle this. Sometimes they're not really adults, they just graduated, I'm just kidding. I know that when I first graduated I was barely an adult, but I was fortunate to be trusted with important information, so I want to do that; I want to create the conditions for people to do that.

I do want to now address the topic that is on everybody's mind: AI. Last week you said that generative AI and accelerated computing have hit the tipping point. As this technology becomes more mainstream, what are the applications that you personally are most excited about?

Well, you have to go back to first principles and ask yourself: what is generative AI? What happened? What happened is that we now have the ability to have software that can understand something. First of all, we digitized everything. For example, gene sequencing: we digitized genes, but what does that sequence of genes mean? We digitized amino acids, but what do they mean? We digitized words, we digitized sounds, we digitized images and videos, we digitized a lot of things, but what do they mean?
We now have the ability, through a lot of study and a lot of data, to understand from their patterns and relationships what they mean. Not only do we understand what they mean, we can translate between them, because we learned about the meaning of these things in the same world. We didn't learn about them separately; we learned about speech and words and paragraphs and vocabulary in the same context, so we found correlations between them, all registered to each other, if you will. So now, not only do we understand the meaning of each modality, we understand how to translate between them. The obvious cases: video to text, that's captioning; text to images, Midjourney; text to text, ChatGPT. Amazing things. So we now know that we understand meaning and we can translate, and the translation of something is the generation of information. All of a sudden you have to take a step back and ask yourself what the implication is for every single layer of everything we do. I'm exercising in front of you, reasoning in front of you, the same way I did some 13 or 14 years ago, I guess, when I first saw AlexNet. How did I reason through it? What did I see? How interesting, what can it do, very cool. But then, most importantly, what does it mean? What does it mean for every single layer of computing? Because we're in the world of computing. What it means is that the way we process information will be fundamentally different in the future; that's what NVIDIA builds, chips and systems. The way we write software will be fundamentally different in the future. The type of software we'll be able to write will be different, new applications. And the processing of those applications will be different.
What was historically a retrieval-based model, where information was pre-recorded, if you will, we wrote the text, pre-recorded it, and retrieved it based on some recommender-system algorithm, becomes, in the future, a model where some seed of information is the starting point, we call them prompts, as you know, and then we generate the rest of it. The future of computing will be highly generative. Let me give you an example of what's happening: we're having a conversation right now, and very little of the information I'm conveying to you is retrieved; most of it is generated. It's called intelligence. In the future, our computers will perform that way: highly generative instead of highly retrieval-based. So you go back and ask yourself, and for the entrepreneurs here you have to ask yourself: which industries will be disrupted? Will we think about networking the same way? Will we think about storage the same way? Will we be as abusive of internet traffic as we are today? Probably not. Notice we're having a conversation right now; we don't have to be as abusive of transporting information as we used to. What's going to be more, what's going to be less, what kinds of applications, and so on. You can go through the entire industrial spread and ask yourself what's going to get disrupted, what's going to be different, what's going to be new, and so forth. And that reasoning starts from what is happening: what is generative AI, fundamentally? What is happening? Go back to first principles with all things.
There was something I was going to tell you about organization; you asked the question and I forgot to answer it. The way you create an organization, by the way, someday: don't worry about how other companies' org charts look. You start from first principles. Remember what an organization is designed to do. In the organizations of the past, there's a king, a CEO, and then you have all the royal subjects, the royal court, and the staff, and you keep working your way down until eventually there are employees. The reason it was designed that way is that they wanted the employees to have as little information as possible, because the fundamental purpose of the soldiers was to die on the field of battle, to die without asking questions. You know this. I only have 30,000 employees, and I would like none of them to die; I would like them to question everything. Does that make sense? So the way you organized in the past and the way you organize today are very different. Second, the question is: what does NVIDIA build? An organization is designed so that we can build whatever it is we build, better. And if we all build different things, why are we organized the same way? Why would the organizational machinery be exactly the same irrespective of what you build? It doesn't make any sense. You build computers, you organize this way; you build healthcare services, you organize exactly the same way? It makes no sense whatsoever. You have to go back to first principles and ask yourself: what kind of machinery is this? What is the input, what is the output, what are the properties of this environment? What is the forest this animal has to live in, and what are its characteristics? Is it stable most of the time, where you're trying to squeeze out the last drop of water, or is it changing all the time, being attacked by everybody? You have to understand this. You're the CEO; your job is to architect the company. That's my first job: to create the conditions by which you can do your life's work, and the architecture has to be right.
So you have to go back to first principles and think about those things. I was fortunate that when I was 29 years old I had the benefit of taking a step back and asking myself: how would I build this company for the future, and what would it look like? What is the operating system, which is called culture? What kind of behavior do we encourage and enhance, and what do we discourage? And so on.

I want to save time for audience questions, but this year's theme for View From The Top is redefining tomorrow, and one question we've asked all of our guests: Jensen, as the co-founder and CEO of NVIDIA, if you were to close your eyes and magically change one thing about tomorrow, what would it be?

Were we supposed to think about this in advance? I'm going to give you a horrible answer: I don't know that it's one thing. Look, there are a lot of things we don't control. Your job is to make a unique contribution, to live a life of purpose, to do something that nobody else in the world would do or can do, so that after you're done, everybody says the world was better because you were here. I live my life kind of like this: I go forward in time and I look backwards. You asked me a question that, from a computer-vision pose perspective, is exactly the opposite of how I think. I never look forward from where I am; I go forward in time and look backwards, and the reason is that it's easier. I look backwards and kind of read my history: we did this, we did it that way, and we broke that problem down. Does that make sense?
So it's a little bit like how you solve problems: you figure out the end result you're looking for and you work backwards to achieve it. I imagine NVIDIA making a unique contribution to advancing the future of computing, which is the single most important instrument of all humanity now. It's not about our self-importance; this is just what we're good at, it's incredibly hard to do, and we believe we can make an absolutely unique contribution. It's taken us 31 years to get here and we're still just beginning our journey. This is insanely hard to do, and when I look backwards, I believe we're going to be remembered as a company that kind of changed everything, not because we went out and changed everything through all the things we said, but because we did this one thing that was insanely hard to do, that we're incredibly good at doing, that we loved doing, and that we did for a long time.

I'm from the GSB and graduated in 2023. My question is: how do you see your company in the next decade? What challenges do you see your company facing, and how are you positioned for them?

First of all, can I just tell you what was going through my head as you said "what challenges"? The list that flew by was so large that I was trying to figure out what to select. Now, the honest truth is that when you ask that question, most of the challenges that show up for me are technical challenges, and the reason is that that was my morning. If you had asked yesterday, it might have been market-creation challenges. There are some markets that, gosh, I would desperately love to create; I just wish we could do it already, but we can't do it alone. NVIDIA is a technology platform company; we're here in service of a whole bunch of other companies, so that they can realize, if you will, our hopes and dreams through them.
lXLBTBBil2U
Nvidia is a technology platform company; we're here in service of a whole bunch of other companies so that they can realize, if you will, our hopes and dreams through them. Some of the things I would love: I would love for the world of biology to be at a point where it's kind of like the world of chip design 40 years ago. Computer-aided design, EDA, that entire industry really made us possible today, and I believe we're going to make possible for them tomorrow computer-aided drug design, because we're now able to represent genes and proteins, and we're very close to being able to represent and understand the meaning of a cell, a combination of a whole bunch of genes. What does a cell mean? It's kind of like asking what a paragraph means. Well, if we could understand a cell like we can understand a paragraph, imagine what we could do. So I'm anxious for that to happen; I'm kind of excited about that. There are some things I'm just excited about because I know we're around the corner on them, for example humanoid robotics, very close around the corner, and the reason for that is because if you can tokenize and understand speech, why can't you tokenize and understand manipulation? With these kinds of computer science techniques, once you figure something out, you ask yourself, well, if I can do that, why can't I do this? So I'm excited about those kinds of things, and that challenge is kind of a happy challenge. Some of the other challenges, of course, are industrial and geopolitical, and they're social, but you've heard all that stuff before. These are all true: the social issues in the world, the geopolitical issues in the world, the why-can't-we-just-get-along things in the world.
lXLBTBBil2U
Why do we have to say those kinds of things and then amplify them in the world? Why do we have to judge people so much in the world? You know all those things; you guys all know them, so I don't have to say them over again. My name is Jose, I'm class of 2023 from the GSB. My question is: are you worried at all about the pace at which we're developing AI, and do you believe that any sort of regulation might be needed? Thank you. Yeah, the answer is yes and no. You know that the greatest breakthrough in modern AI is of course deep learning, and it enabled great progress. But another incredible breakthrough is something that humans know and practice all the time, and we just invented it for language models: grounding through reinforcement learning from human feedback. I provide reinforcement learning human feedback every day; that's my job. And for the parents in the room, you're providing reinforcement learning human feedback all the time. Now we've just figured out how to do that at a systematic level for artificial intelligence. There's a whole bunch of other technology necessary to guardrail, fine-tune, and ground, for example: how do I generate tokens that obey the laws of physics?
lXLBTBBil2U
Right now things are floating in space and doing things, and they don't obey the laws of physics. Doing that requires technology. Guardrailing requires technology, fine-tuning requires technology, alignment requires technology, safety requires technology. The reason planes are so safe is because all of the autopilot systems are surrounded by diversity and redundancy and all kinds of functional-safety and active-safety systems that were invented. I need all of that to be invented much, much faster. You also know that the border between cybersecurity and artificial intelligence is going to become blurrier and blurrier; we need technology to advance very quickly in the area of cybersecurity in order to protect us from artificial intelligence. So in a lot of ways we need technology to go faster, a lot faster. Regulation: there are two types of regulation. There's social regulation, and I don't know what to do about that, but there's also product-and-services regulation, and we know exactly what to do about that. The FAA, the FDA, NHTSA, you name it, all the F's and the N's and the FCCs, they all have regulations for products and services that have particular use cases. Bar exams, doctors, and so on; you all have qualification exams, you all have standards you have to reach, you all have to be continuously certified, accountants and so on. Whether it's a product or a service, there are lots and lots of regulations. Please do not add a super-regulation that cuts across all of it. The regulator who regulates accounting should not be the regulator who regulates a doctor. I love accountants, but if I ever need open-heart surgery, the fact that they can close the books is interesting but not sufficient. So I would like all of those fields that already have products and services to also enhance their regulation in the context of AI. But I left out this one very big one, which is the social implication of AI, and how do you deal with that? I don't have great answers for that, but enough people are talking about it. It's important to subdivide all of this into chunks, does that make sense, so that we don't become super hyper-focused on this one thing at the expense of a whole bunch of routine things that we could have done, and as a result people are getting killed by cars and planes.
lXLBTBBil2U
It doesn't make any sense; we should make sure we do the right things there. Very practical things. May I take one more question? Well, we have some rapid-fire questions for you, as is the View From The Top tradition. Okay, I was trying to avoid that. All right, fire away. Your first job was at Denny's; they now have a booth dedicated to you. What was your fondest memory of working there? My second job was AMD, by the way, is there a booth dedicated to me there? I'm just kidding. I loved my job there, I did; it's a great company. If there were a worldwide shortage of black leather jackets, what would we see you wearing? Oh no, I've got a large reservoir of black jackets; I'll be the only person who is not concerned. You spoke a lot about textbooks; if you had to write one, what would it be called? I wouldn't write one. You're asking me a hypothetical question that has no possibility of happening. That's fair. And finally, if you could share one parting piece of advice to broadcast across Stanford, what would it be? It's not a word, but: have a core belief, gut-check it every day, pursue it with all your might, pursue it for a very long time, surround yourself with people you love, and take them on that ride. So that's the story of Nvidia. Jensen, this last hour has been a treat; thank you for spending it with us. Thank you very much.
DiGB5uAYKAg
For nearly four decades Moore’s Law has been the governing dynamics of the computer industry which in turn has impacted every industry. The exponential performance increase at constant cost and power has slowed. Yet, computing advance has gone to lightspeed. The warp drive engine is accelerated computing and the energy source is AI. The arrival of accelerated computing and AI is timely as industries tackle powerful dynamics sustainability generative AI and digitalization. Without Moore’s Law, as computing surges, data center power is skyrocketing and companies struggle to achieve Net Zero. The impressive capabilities of Generative AI created a sense of urgency for companies to reimagine their products and business models. Industrial companies are racing to digitalize and reinvent into software-driven tech companies to be the disruptor and not the disrupted. Today, we will discuss how accelerated computing and AI are powerful tools for tackling these challenges and engaging the enormous opportunities ahead. We will share new advances in NVIDIA’s full-stack, datacenter-scale, accelerated computing platform. We will reveal new chips and systems, acceleration libraries, cloud and AI services and partnerships that open new markets. Welcome to GTC! GTC is our conference for developers. The global NVIDIA ecosystem spans 4 million developers, 40,000 companies and 14,000 startups. Thank you to our Diamond sponsors for supporting us and making GTC 2023 a huge success. We’re so excited to welcome more than 250,000 of you to our conference. GTC has grown incredibly. Only four years ago, our in-person GTC conference had 8,000 attendees. At GTC 2023, we’ll learn from leaders like Demis Hassabis of DeepMind Valeri Taylor of Argonne Labs Scott Belsky of Adobe Paul Debevec of Netflix Thomas Schulthess of ETH Zurich and a special fireside chat I’m having with Ilya Sutskever co-founder of OpenAI, the creator of ChatGPT. We have 650 amazing talks from the brightest minds in academia and the world’s largest industries: There are more than 70 talks on Generative AI alone. Other great talks, like pre-trained multi-task models for robotics… sessions on synthetic data generation, an important method for advancing AI including one on using Isaac Sim to generate physically based lidar point clouds a bunch of talks on digital twins, from using AI to populate virtual factories of the future to restoring lost Roman mosaics
DiGB5uAYKAg
fireside chat I’m having with Ilya Sutskever co-founder of OpenAI, the creator of ChatGPT. We have 650 amazing talks from the brightest minds in academia and the world’s largest industries: There are more than 70 talks on Generative AI alone. Other great talks, like pre-trained multi-task models for robotics… sessions on synthetic data generation, an important method for advancing AI including one on using Isaac Sim to generate physically based lidar point clouds a bunch of talks on digital twins, from using AI to populate virtual factories of the future to restoring lost Roman mosaics of the past cool talks on computational instruments, including a giant optical telescope and a photon-counting CT materials science for carbon capture and solar cells, to climate science, including our work on Earth-2 important works by NVIDIA Research on trustworthy AI and AV safety From computational lithography for micro-chips, to make the smallest machines to AI at the Large Hadron Collider to explain the universe. The world’s most important companies are here from auto and transportation healthcare, manufacturing, financial services, retail, apparel, media and entertainment, telco and of course, the world’s leading AI companies. The purpose of GTC is to inspire the world on the art-of-the-possible of accelerating computing and to celebrate the achievements of the scientists and researchers that use it. I am a translator. Transforming text into creative discovery, movement into animation, and direction into action. I am a healer. Exploring the building blocks that make us unique modeling new threats before they happen and searching for the cures to keep them at bay. I am a visionary. Generating new medical miracles and giving us a new perspective on our sun to keep us safe here on earth. I am a navigator. Discovering a unique moment in a sea of content we’re announcing the next generation and the perfect setting for any story. I am a creator. Building 3D experiences from snapshots and adding new levels of reality to our virtual selves. I am a helper. Bringing brainstorms to life sharing the wisdom of a million programmers and turning ideas into virtual worlds. Build northern forest. I even helped write this script breathed life into the words and composed the melody. I am AI. Brought to life by NVIDIA, deep learning, and brilliant minds everywhere. NVIDIA invented accelerated computing to solve problems that normal computers can’t. Accelerated computing is not
DiGB5uAYKAg
in a sea of content we’re announcing the next generation and the perfect setting for any story. I am a creator. Building 3D experiences from snapshots and adding new levels of reality to our virtual selves. I am a helper. Bringing brainstorms to life sharing the wisdom of a million programmers and turning ideas into virtual worlds. Build northern forest. I even helped write this script breathed life into the words and composed the melody. I am AI. Brought to life by NVIDIA, deep learning, and brilliant minds everywhere. NVIDIA invented accelerated computing to solve problems that normal computers can’t. Accelerated computing is not easy; it requires full-stack invention from chips, systems, networking, acceleration libraries, to refactoring the applications. Each optimized stack accelerates an application domain from graphics, imaging, particle or fluid dynamics, quantum physics, to data processing and machine learning. Once accelerated, the application can enjoy incredible speed-up, as well as scale-up across many computers. The combination of speed-up and scale-up has enabled us to achieve a million-X for many applications over the past decade helping solve problems previously impossible. Though there are many examples, the most famous is deep learning. In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton needed an insanely fast computer to train the AlexNet computer vision model. The researchers trained AlexNet with 14 million images on GeForce GTX 580 processing 262 quadrillion floating-point operations, and the trained model won the ImageNet challenge by a wide margin, and ignited the Big Bang of AI. A decade later, the transformer model was invented. And Ilya, now at OpenAI, trained the GPT-3 large language model to predict the next word. 323 sextillion floating-point operations were required to train GPT-3. One million times more floating-point operations than to train AlexNet. The result this time – ChatGPT, the AI heard around the world. A new computing platform has been invented. The iPhone moment of AI has started. Accelerated computing and AI have arrived. Acceleration libraries are at the core of accelerated computing. These libraries connect to applications which connect to the world’s industries, forming a network of networks. Three decades in the making, several thousand applications are now NVIDIA accelerated with libraries in almost every domain of science and industry. All NVIDIA GPUs are CUDA-compatible, providing a large install base and significant reach for developers
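As a quick back-of-the-envelope check on the "one million times more" claim, the ratio of the two training budgets quoted above can be computed directly. The short Python sketch below only divides the stated figures (262 quadrillion and 323 sextillion floating-point operations); it is a sanity check on the transcript's numbers, nothing more.

```python
# Sanity check on the training-compute figures quoted above.
alexnet_flops = 262e15   # 262 quadrillion floating-point operations (AlexNet, 2012)
gpt3_flops = 323e21      # 323 sextillion floating-point operations (GPT-3)

ratio = gpt3_flops / alexnet_flops
print(f"GPT-3 used ~{ratio:,.0f}x the floating-point operations of AlexNet")
# ~1,232,824x -- consistent with "one million times more"
```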
DiGB5uAYKAg
more floating-point operations than to train AlexNet. The result this time – ChatGPT, the AI heard around the world. A new computing platform has been invented. The iPhone moment of AI has started. Accelerated computing and AI have arrived. Acceleration libraries are at the core of accelerated computing. These libraries connect to applications which connect to the world’s industries, forming a network of networks. Three decades in the making, several thousand applications are now NVIDIA accelerated with libraries in almost every domain of science and industry. All NVIDIA GPUs are CUDA-compatible, providing a large install base and significant reach for developers. A wealth of accelerated applications attract end users, which creates a large market for cloud service providers and computer makers to serve. A large market affords billions in R&D to fuel its growth. NVIDIA has established the accelerated computing virtuous cycle. Of the 300 acceleration libraries and 400 AI models that span ray tracing and neural rendering, physical, earth, and life sciences, quantum physics and chemistry, computer vision, data processing, machine learning and AI, we updated 100 this year, increasing performance and features for our entire installed base. Let me highlight some acceleration libraries that solve new challenges and open new markets. The auto and aerospace industries use CFD for turbulence and aerodynamics simulation. The electronics industry uses CFD for thermal management design. This is Cadence’s slide of their new CFD solver accelerated by CUDA. At equivalent system cost, NVIDIA A100 is 9X the throughput of CPU servers. Or at equivalent simulation throughput, NVIDIA is 9X lower cost or 17X less energy consumed. Ansys, Siemens, Cadence, and other leading CFD solvers are now CUDA-accelerated. Worldwide, industrial CAE uses nearly 100 billion CPU core hours yearly. Acceleration is the best way to reclaim power and achieve sustainability and Net Zero. NVIDIA is partnering with the global quantum computing research community. The NVIDIA Quantum platform consists of libraries and systems for researchers to advance quantum programming models, system architectures, and algorithms. cuQuantum is an acceleration library for quantum circuit simulations. IBM Qiskit, Google Cirq, Baidu Quantum Leaf, QMWare, QuEra, Xanadu Pennylane, Agnostiq, and AWS Braket have integrated cuQuantum into their simulation frameworks. Open Quantum CUDA is our hybrid GPU-Quantum programming model.
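To make the cuQuantum integration concrete, here is a minimal sketch of how a GPU-backed state-vector simulation is typically selected in Qiskit Aer, one of the frameworks listed above. It assumes a qiskit-aer build with GPU (cuQuantum/cuStateVec) support is installed, and the two-qubit Bell circuit is just a placeholder workload.

```python
# Minimal sketch: running a circuit on a GPU state-vector simulator via Qiskit Aer.
# Assumes qiskit and a GPU-enabled qiskit-aer build (cuQuantum/cuStateVec) are installed.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)                       # Hadamard on qubit 0
qc.cx(0, 1)                   # entangle qubits 0 and 1
qc.measure([0, 1], [0, 1])

sim = AerSimulator(method="statevector", device="GPU")  # requires a GPU build of Aer
result = sim.run(transpile(qc, sim), shots=1000).result()
print(result.get_counts())    # expect roughly 50/50 '00' and '11'
```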
DiGB5uAYKAg
is the best way to reclaim power and achieve sustainability and Net Zero. NVIDIA is partnering with the global quantum computing research community. The NVIDIA Quantum platform consists of libraries and systems for researchers to advance quantum programming models, system architectures, and algorithms. cuQuantum is an acceleration library for quantum circuit simulations. IBM Qiskit, Google Cirq, Baidu Quantum Leaf, QMWare, QuEra, Xanadu Pennylane, Agnostiq, and AWS Braket have integrated cuQuantum into their simulation frameworks. Open Quantum CUDA is our hybrid GPU-Quantum programming model. IonQ, ORCA Computing, Atom, QuEra, Oxford Quantum Circuits, IQM, Pasqal, Quantum Brilliance, Quantinuum, Rigetti, Xanadu, and Anyon have integrated Open Quantum CUDA. Error correction on a large number of qubits is necessary to recover data from quantum noise and decoherence. Today, we are announcing a quantum control link, developed in partnership with Quantum Machines, that connects NVIDIA GPUs to a quantum computer to do error correction at extremely high speeds. Though commercial quantum computers are still a decade or two away, we are delighted to support this large and vibrant research community with NVIDIA Quantum. Enterprises worldwide use Apache Spark to process data lakes and warehouses, SQL queries, graph analytics, and recommender systems. Spark-RAPIDS is NVIDIA’s accelerated Apache Spark data processing engine. Data processing is the leading workload of the world’s $500B cloud computing spend. Spark-RAPIDS now accelerates major cloud data processing platforms, including GCP Dataproc, Amazon EMR, Databricks, and Cloudera. Recommender systems use vector databases to store, index, search, and retrieve massive datasets of unstructured data. A new important use-case of vector databases is large language models to retrieve domain-specific or proprietary facts that can be queried during text generation. We are introducing a new library, RAFT, to accelerate indexing, loading the data and retrieving a batch of neighbors for a single query. We are bringing the acceleration of RAFT to Meta’s open-source FAISS AI Similarity Search, Milvus open-source vector DB used by over 1,000 organizations, and Redis with over 4B docker pulls. Vector databases will be essential for organizations building proprietary large language models. Twenty-two years ago, operations research scientists Li
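As a rough illustration of how a Spark job is pointed at GPU acceleration, the sketch below shows the kind of session configuration the open-source RAPIDS Accelerator for Apache Spark uses, as I understand it; the jar path, memory, and GPU counts are placeholder assumptions for a single-GPU worker rather than a verified production setup.

```python
# Sketch: enabling GPU SQL/DataFrame acceleration in PySpark with the RAPIDS Accelerator.
# The jar location and resource settings below are placeholders for a single-GPU worker.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("spark-rapids-sketch")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")       # RAPIDS Accelerator plugin
    .config("spark.rapids.sql.enabled", "true")                   # turn on GPU SQL planning
    .config("spark.executor.resource.gpu.amount", "1")            # one GPU per executor
    .config("spark.jars", "/opt/sparkRapidsPlugin/rapids-4-spark.jar")  # placeholder path
    .getOrCreate()
)

df = spark.range(0, 100_000_000).selectExpr("id % 1000 AS key", "id AS value")
# A typical shuffle-heavy aggregation of the kind the plugin can move onto the GPU.
df.groupBy("key").sum("value").show(5)
```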
DiGB5uAYKAg
case of vector databases is large language models to retrieve domain-specific or proprietary facts that can be queried during text generation. We are introducing a new library, RAFT, to accelerate indexing, loading the data and retrieving a batch of neighbors for a single query. We are bringing the acceleration of RAFT to Meta’s open-source FAISS AI Similarity Search, Milvus open-source vector DB used by over 1,000 organizations, and Redis with over 4B docker pulls. Vector databases will be essential for organizations building proprietary large language models. Twenty-two years ago, operations research scientists Li and Lim posted a series of challenging pickup and delivery problems. PDP shows up in manufacturing, transportation, retail and logistics, and even disaster relief. PDP is a generalization of the Traveling Salesperson Problem and is NP-hard meaning there is no efficient algorithm to find an exact solution. The solution time grows factorially as the problem size increases. Using an evolution algorithm and accelerated computing to analyze 30 billion moves per second NVIDIA cuOpt has broken the world record and discovered the best solution for Li&Lim’s challenge. AT&T routinely dispatches 30,000 technicians to service 13 million customers across 700 geographic zones. Today, running on CPUs, AT&T’s dispatch optimization takes overnight. AT&T wants to find a dispatch solution in real time that continuously optimizes for urgent customer needs and overall customer satisfaction, while adjusting for delays and new incidents that arise. With cuOpt, AT&T can find a solution 100X faster and update their dispatch in real time. AT&T has adopted a full suite of NVIDIA AI libraries. In addition to Spark-RAPIDS and cuOPT, they’re using Riva for conversational AI and Omniverse for digital avatars. AT&T is tapping into NVIDIA accelerated computing and AI for sustainability, cost savings, and new services. cuOpt can also optimize logistic services. 400 billion parcels are delivered to 377 billion stops each year. Deloitte, Capgemini, Softserve, Accenture, and Quantiphi are using NVIDIA cuOpt to help customers optimize operations. NVIDIA’s inference platform consists of three software SDKs. NVIDIA TensorRT is our inference runtime that optimizes for the target GPU. NVIDIA Triton is a multi-framework data center inference serving software supporting GPUs and CPUs. Microsoft Office and Teams
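To give a feel for why problems like Li & Lim's blow up and why heuristic search is used at all, here is a deliberately tiny, pure-Python nearest-neighbor routing sketch. It is only a conceptual stand-in for the evolutionary search cuOpt runs on GPUs, not the cuOpt API, and the depot and stop coordinates are made up.

```python
# Toy illustration of heuristic routing (NOT the cuOpt API): visit all stops from a depot
# using a greedy nearest-neighbor rule, the kind of baseline that evolutionary search improves on.
import math
import random

random.seed(0)
depot = (0.0, 0.0)
stops = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(12)]  # made-up stops

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def greedy_route(depot, stops):
    route, remaining, here = [], list(stops), depot
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here, s))  # always drive to the closest stop
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

route = greedy_route(depot, stops)
total = dist(depot, route[0]) + sum(dist(a, b) for a, b in zip(route, route[1:])) + dist(route[-1], depot)
print(f"greedy tour length over {len(stops)} stops: {total:.1f}")
# Exact search would need to consider 12! (about 479 million) orderings; real PDP adds
# pickup/delivery pairing and time windows, which is why cuOpt evaluates billions of
# candidate moves per second instead of enumerating.
```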
DiGB5uAYKAg
. AT&T is tapping into NVIDIA accelerated computing and AI for sustainability, cost savings, and new services. cuOpt can also optimize logistic services. 400 billion parcels are delivered to 377 billion stops each year. Deloitte, Capgemini, Softserve, Accenture, and Quantiphi are using NVIDIA cuOpt to help customers optimize operations. NVIDIA’s inference platform consists of three software SDKs. NVIDIA TensorRT is our inference runtime that optimizes for the target GPU. NVIDIA Triton is a multi-framework data center inference serving software supporting GPUs and CPUs. Microsoft Office and Teams, Amazon, American Express, and the U.S. Postal Service are among the 40,000 customers using TensorRT and Triton. Uber uses Triton to serve hundreds of thousands of ETA predictions per second. With over 60 million daily users, Roblox uses Triton to serve models for game recommendations build avatars, and moderate content and marketplace ads. We are releasing some great new features – model analyzer support for model ensembles, multiple concurrent model serving, and multi-GPU, multi-node inference for GPT-3 large language models. NVIDIA Triton Management Service is our new software that automates the scaling and orchestration of Triton inference instances across a data center. Triton Management Service will help you improve the throughput and cost efficiency of deploying your models. 50-80% of cloud video pipelines are processed on CPUs consuming power and cost and adding latency. CV-CUDA for computer vision, and VPF for video processing, are new cloud-scale acceleration libraries. CV-CUDA includes 30 computer vision operators for detection, segmentation, and classification. VPF is a python video encode and decode acceleration library. Tencent uses CV-CUDA and VPF to process 300,000 videos per day. Microsoft uses CV-CUDA and VPF to process visual search. Runway is a super cool company that uses CV-CUDA and VPF to process video for their cloud Generative AI video editing service. Already, 80% of internet traffic is video. User-generated video content is driving significant growth and consuming massive amounts of power. We should accelerate all video processing and reclaim the power. CV-CUDA and VPF are in early access. NVIDIA accelerated computing helped achieve a genomics milestone now doctors can draw blood and sequence a patient’s DNA in the
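For readers who have not used Triton, the sketch below shows the general shape of a client-side inference request using the tritonclient HTTP API. The server address, the model name "my_model", and the tensor names and shapes are placeholder assumptions; they would need to match whatever model repository the server has actually loaded.

```python
# Sketch of a Triton inference request over HTTP. Model name, tensor names, and shapes
# ("my_model", "INPUT0", "OUTPUT0", [1, 3, 224, 224]) are placeholders for illustration.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)   # dummy image-like input
inp = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)
out = httpclient.InferRequestedOutput("OUTPUT0")

response = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
scores = response.as_numpy("OUTPUT0")
print(scores.shape)
```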
DiGB5uAYKAg
process 300,000 videos per day. Microsoft uses CV-CUDA and VPF to process visual search. Runway is a super cool company that uses CV-CUDA and VPF to process video for their cloud Generative AI video editing service. Already, 80% of internet traffic is video. User-generated video content is driving significant growth and consuming massive amounts of power. We should accelerate all video processing and reclaim the power. CV-CUDA and VPF are in early access. NVIDIA accelerated computing helped achieve a genomics milestone now doctors can draw blood and sequence a patient’s DNA in the same visit. In another milestone, NVIDIA-powered instruments reduced the cost of whole genome sequencing to just $100. Genomics is a critical tool in synthetic biology with applications ranging from drug discovery and agriculture to energy production. NVIDIA Parabricks is a suite of AI-accelerated libraries for end-to-end genomics analysis in the cloud or in-instrument. NVIDIA Parabricks is available in every public cloud and genomics platforms like Terra, DNAnexus, and FormBio. Today, we’re announcing Parabricks 4.1 and will run on NVIDIA-accelerated genomics instruments from PacBio, Oxford Nanopore, Ultima, Singular, BioNano, and Nanostring. The world’s $250B medical instruments market is being transformed. Medical instruments will be software-defined and AI powered. NVIDIA Holoscan is a software library for real-time sensor processing systems. Over 75 companies are developing medical instruments on Holoscan. Today, we are announcing Medtronic, the world leader in medical instruments, and NVIDIA are building their AI platform for software-defined medical devices. This partnership will create a common platform for Medtronic systems, ranging from surgical navigation to robotic-assisted surgery. Today, Medtronic announced that its next-generation GI Genius system, with AI for early detection of colon cancer is built on NVIDIA Holoscan and will ship around the end of this year. The chip industry is the foundation of nearly every industry. Chip manufacturing demands extreme precision, producing features 1,000 times smaller than a bacterium and on the order of a single gold atom or a strand of human DNA. Lithography, the process of creating patterns on a wafer, is the beginning of the chip manufacturing process and consists of two stages – photomask making and pattern projection. It is
DiGB5uAYKAg
robotic-assisted surgery. Today, Medtronic announced that its next-generation GI Genius system, with AI for early detection of colon cancer is built on NVIDIA Holoscan and will ship around the end of this year. The chip industry is the foundation of nearly every industry. Chip manufacturing demands extreme precision, producing features 1,000 times smaller than a bacterium and on the order of a single gold atom or a strand of human DNA. Lithography, the process of creating patterns on a wafer, is the beginning of the chip manufacturing process and consists of two stages – photomask making and pattern projection. It is fundamentally an imaging problem at the limits of physics. The photomask is like a stencil of a chip. Light is blocked or passed through the mask to the wafer to create the pattern. The light is produced by the ASML EUV extreme ultraviolet lithography system. Each system is more than a quarter-of-a-billion dollars. ASML EUV uses a radical way to create light. Laser pulses firing 50,000 times a second at a drop of tin, vaporizing it, creating a plasma that emits 13.5nm EUV light nearly X-ray. Multilayer mirrors guide the light to the mask. The multilayer reflectors in the mask reticle take advantage of interference patterns of the 13.5nm light to create finer features down to 3nm. Magic. The wafer is positioned within a quarter of a nanometer and aligned 20,000 times a second to adjust for any vibration. The step before lithography is equally miraculous. Computational lithography applies inverse physics algorithms to predict the patterns on the mask that will produce the final patterns on the wafer. In fact, the patterns on the mask do not resemble the final features at all. Computational lithography simulates Maxwell’s equations of the behavior of light passing through optics and interacting with photoresists. Computational lithography is the largest computation workload in chip design and manufacturing consuming tens of billions of CPU hours annually. Massive data centers run 24/7 to create reticles used in lithography systems. These data centers are part of the nearly $200 billion annual CAPEX invested by chip manufacturers. Computational lithography is growing fast as algorithm complexity increases enabling the industry to go to 2nm and beyond. NVIDIA today is announcing cuLitho, a library for computational lithography. cuLitho, a massive body of
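To illustrate, in a very reduced form, why computational lithography is an inverse imaging problem, the sketch below uses a Gaussian blur as a stand-in for the optical forward model and nudges a mask toward whatever best reproduces a target pattern after blurring. This is a toy with none of the real physics (no Maxwell solve, no photoresist model); it exists purely to show the distinction between the pattern on the mask and the pattern that lands on the wafer.

```python
# Toy "inverse lithography": find a mask whose blurred image matches a target pattern.
# The Gaussian blur is a crude stand-in for the real optical model (no Maxwell equations here).
import numpy as np
from scipy.ndimage import gaussian_filter

target = np.zeros((64, 64))
target[24:40, 10:54] = 1.0            # desired wafer pattern: a simple bar

def forward(mask, sigma=2.0):
    """Pretend optics: what the wafer 'sees' is a blurred version of the mask."""
    return gaussian_filter(mask, sigma)

mask = target.copy()
for _ in range(200):                  # crude iterative correction loop
    error = forward(mask) - target
    mask = np.clip(mask - 0.5 * error, 0.0, 1.0)

print("residual error:", np.abs(forward(mask) - target).mean())
# The optimized mask is no longer identical to the target pattern -- the same reason
# real photomasks "do not resemble the final features at all."
```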
DiGB5uAYKAg
equations of the behavior of light passing through optics and interacting with photoresists. Computational lithography is the largest computation workload in chip design and manufacturing consuming tens of billions of CPU hours annually. Massive data centers run 24/7 to create reticles used in lithography systems. These data centers are part of the nearly $200 billion annual CAPEX invested by chip manufacturers. Computational lithography is growing fast as algorithm complexity increases enabling the industry to go to 2nm and beyond. NVIDIA today is announcing cuLitho, a library for computational lithography. cuLitho, a massive body of work that has taken nearly four years, and with close collaborations with TSMC, ASML, and Synopsys, accelerates computational lithography by over 40X. There are 89 reticles for the NVIDIA H100. Running on CPUs, a single reticle currently takes two weeks to process. cuLitho, running on GPUs, can process a reticle in a single 8-hour shift. TSMC can reduce their 40,000 CPU servers used for computational lithography by accelerating with cuLitho on just 500 DGX H100 systems, reducing power from 35MW to just 5MW. With cuLitho, TSMC can reduce prototype cycle time, increase throughput and reduce the carbon footprint of their manufacturing, and prepare for 2nm and beyond. TSMC will be qualifying cuLitho for production starting in June. Every industry needs to accelerate every workload, so that we can reclaim power and do more with less. Over the past ten years, cloud computing has grown 20% annually into a massive $1T industry. Some 30 million CPU servers do the majority of the processing. There are challenges on the horizon. As Moore’s Law ends, increasing CPU performance comes with increased power. And the mandate to decrease carbon emissions is fundamentally at odds with the need to increase data centers. Cloud computing growth is power-limited. First and foremost, data centers must accelerate every workload. Acceleration will reclaim power. The energy saved can fuel new growth. Whatever is not accelerated will be processed on CPUs. The CPU design point for accelerated cloud datacenters differs fundamentally from the past. In AI and cloud services, accelerated computing offloads parallelizable workloads, and CPUs process other workloads, like web RPC and database queries. We designed the Grace CPU for an AI and cloud-first world, where AI workloads
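The cuLitho throughput and power claims quoted above can be cross-checked with simple arithmetic; the sketch below only restates the figures from the passage (two weeks versus one 8-hour shift per reticle, 35 MW versus 5 MW, 89 reticles for H100) and prints the implied ratios.

```python
# Back-of-the-envelope check on the cuLitho figures quoted above.
hours_per_reticle_cpu = 14 * 24      # "two weeks" on CPU servers
hours_per_reticle_gpu = 8            # "a single 8-hour shift" on GPUs
speedup = hours_per_reticle_cpu / hours_per_reticle_gpu
print(f"per-reticle speedup: {speedup:.0f}x")            # 42x, consistent with "over 40X"

power_cpu_mw, power_gpu_mw = 35, 5
print(f"power reduction: {power_cpu_mw / power_gpu_mw:.0f}x (35 MW -> 5 MW)")

reticles_h100 = 89                   # reticles for NVIDIA H100
print(f"serial CPU processing of all H100 reticles: {reticles_h100 * 2} reticle-weeks")
```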
DiGB5uAYKAg
And the mandate to decrease carbon emissions is fundamentally at odds with the need to increase data centers. Cloud computing growth is power-limited. First and foremost, data centers must accelerate every workload. Acceleration will reclaim power. The energy saved can fuel new growth. Whatever is not accelerated will be processed on CPUs. The CPU design point for accelerated cloud datacenters differs fundamentally from the past. In AI and cloud services, accelerated computing offloads parallelizable workloads, and CPUs process other workloads, like web RPC and database queries. We designed the Grace CPU for an AI and cloud-first world, where AI workloads are GPU-accelerated and Grace excels at single-threaded execution and memory processing. It’s not just about the CPU chip. Datacenter operators optimize for throughput and total cost of ownership of the entire datacenter. We designed Grace for high energy-efficiency at cloud datacenter scale. Grace comprises 72 Arm cores connected by a super high-speed on-chip scalable coherent fabric that delivers 3.2 TB/sec of cross-sectional bandwidth. Grace Superchip connects 144 cores between two CPU dies over a 900 GB/sec low-power chip-to-chip coherent interface. The memory system is LPDDR low-power memory, like used in cellphones, that we specially enhanced for use in datacenters. It delivers 1 TB/s, 2.5x the bandwidth of today’s systems at 1/8th the power. The entire 144-core Grace Superchip module with 1TB of memory is only 5x8 inches. It is so low power it can be air cooled. This is the computing module with passive cooling. Two Grace Superchip computers can fit in a single 1U air-cooled server. Grace’s performance and power efficiency are excellent for cloud and scientific computing applications. We tested Grace on a popular Google benchmark, which tests how quickly cloud microservices communicate and the Hi-Bench suite that tests Apache Spark memory-intensive data processing. These kinds of workloads are foundational for cloud datacenters. At microservices, Grace is 1.3X faster than the average of the newest generation x86 CPUs and 1.2X faster at data processing And that higher performance is achieved using only 60% of the power measured at the full server node. CSPs can outfit a power-limited data center with 1.7X more Grace servers, each delivering 25% higher
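The "1.7X more servers" and "2X growth" figures follow directly from the power and throughput numbers in the passage; here is that arithmetic spelled out, using only the ratios stated above.

```python
# How the Grace iso-power claims compose, using the figures quoted in the passage.
relative_power_per_node = 0.60        # Grace node runs at ~60% of the x86 node's power
perf_per_node_microservices = 1.3     # 1.3x faster at microservices
perf_per_node_dataproc = 1.2          # 1.2x faster at data processing

nodes_at_iso_power = 1 / relative_power_per_node
print(f"servers per fixed power budget: {nodes_at_iso_power:.1f}x")     # ~1.7x

for name, perf in [("microservices", perf_per_node_microservices),
                   ("data processing", perf_per_node_dataproc)]:
    print(f"{name}: total throughput at iso-power ~{nodes_at_iso_power * perf:.1f}x")
# Roughly 2x either way, matching "Grace gives CSPs 2X the growth opportunity."
```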
DiGB5uAYKAg
and scientific computing applications. We tested Grace on a popular Google benchmark, which tests how quickly cloud microservices communicate and the Hi-Bench suite that tests Apache Spark memory-intensive data processing. These kinds of workloads are foundational for cloud datacenters. At microservices, Grace is 1.3X faster than the average of the newest generation x86 CPUs and 1.2X faster at data processing And that higher performance is achieved using only 60% of the power measured at the full server node. CSPs can outfit a power-limited data center with 1.7X more Grace servers, each delivering 25% higher throughput. At iso-power, Grace gives CSPs 2X the growth opportunity. Grace is sampling. And Asus, Atos, Gigabyte, HPE, QCT, Supermicro, Wistron, and ZT are building systems now. In a modern software-defined data center, the operating system doing virtualization, network, storage, and security can consume nearly half of the datacenter’s CPU cores and associated power. Datacenters must accelerate every workload to reclaim power and free CPUs for revenue-generating workloads. NVIDIA BlueField offloads and accelerates the datacenter operating system and infrastructure software. Over two dozen ecosystem partners, including Check Point, Cisco, DDN, Dell EMC Juniper, Palo Alto Networks, Red Hat, and VMWare, use BlueField’s datacenter acceleration technology to run their software platforms more efficiently. BlueField-3 is in production and adopted by leading cloud service providers, Baidu, CoreWeave, JD.com, Microsoft Azure, Oracle OCI, and Tencent Games, to accelerate their clouds. NVIDIA accelerated computing starts with DGX the world’s AI supercomputer the engine behind the large language model breakthrough. I hand-delivered the world’s first DGX to OpenAI. Since then, half of the Fortune 100 companies have installed DGX AI supercomputers. DGX has become the essential instrument of AI. The GPU of DGX is eight H100 modules. H100 has a Transformer Engine designed to process models like the amazing ChatGPT, which stands for Generative Pre-trained Transformers. The eight H100 modules are NVLINK’d to each other across NVLINK switches to allow fully non-blocking transactions. The eight H100s work as one giant GPU. The computing
DiGB5uAYKAg
. I hand-delivered the world’s first DGX to OpenAI. Since then, half of the Fortune 100 companies have installed DGX AI supercomputers. DGX has become the essential instrument of AI. The GPU of DGX is eight H100 modules. H100 has a Transformer Engine designed to process models like the amazing ChatGPT, which stands for Generative Pre-trained Transformers. The eight H100 modules are NVLINK’d to each other across NVLINK switches to allow fully non-blocking transactions. The eight H100s work as one giant GPU. The computing fabric is one of the most vital systems of the AI supercomputer. 400 Gbps ultra-low latency NVIDIA Quantum InfiniBand with in-network processing connects hundreds and thousands of DGX nodes into an AI supercomputer. NVIDIA DGX H100 is the blueprint for customers building AI infrastructure worldwide. It is now in full production. I am thrilled that Microsoft announced Azure is opening private previews to their H100 AI supercomputer. Other systems and cloud services will soon come from Atos, AWS, Cirrascale, CoreWeave, Dell, Gigabyte, Google, HPE, Lambda Labs, Lenovo, Oracle, Quanta, and SuperMicro. The market for DGX AI supercomputers has grown significantly. Originally used as an AI research instrument, DGX AI supercomputers are expanding into operation running 24/7 to refine data and process AI. DGX supercomputers are modern AI factories. We are at the iPhone moment of AI. Start-ups are racing to build disruptive products and business models, while incumbents are looking to respond. Generative AI has triggered a sense of urgency in enterprises worldwide to develop AI strategies. Customers need to access NVIDIA AI easier and faster. We are announcing NVIDIA DGX Cloud through partnerships with Microsoft Azure, Google GCP, and Oracle OCI to bring NVIDIA DGX AI supercomputers to every company, instantly, from a browser. DGX Cloud is optimized to run NVIDIA AI Enterprise, the world’s leading acceleration library suite for end-to-end development and deployment of AI. DGX Cloud offers customers the best of NVIDIA AI and the best of the world’s leading cloud service providers. This partnership brings NVIDIA’s ecosystem to the CSPs, while amplifying NVIDIA’s scale and reach. This win-win partnership gives customers racing to engage Generative AI instant access to
DiGB5uAYKAg
Microsoft Azure, Google GCP, and Oracle OCI to bring NVIDIA DGX AI supercomputers to every company, instantly, from a browser. DGX Cloud is optimized to run NVIDIA AI Enterprise, the world’s leading acceleration library suite for end-to-end development and deployment of AI. DGX Cloud offers customers the best of NVIDIA AI and the best of the world’s leading cloud service providers. This partnership brings NVIDIA’s ecosystem to the CSPs, while amplifying NVIDIA’s scale and reach. This win-win partnership gives customers racing to engage Generative AI instant access to NVIDIA in global-scale clouds. We’re excited by the speed, scale, and reach of this cloud extension of our business model. Oracle Cloud Infrastructure, OCI, will be the first NVIDIA DGX Cloud. OCI has excellent performance. They have a two-tier computing fabric and management network. NVIDIA’s CX-7, with the industry’s best RDMA, is the computing fabric. And BlueField-3 will be the infrastructure processor for the management network. The combination is a state-of-the-art DGX AI supercomputer that can be offered as a multi-tenant cloud service. We have 50 early access enterprise customers, spanning consumer internet and software, healthcare media and entertainment, and financial services. ChatGPT, Stable Diffusion, DALL-E, and Midjourney have awakened the world to Generative AI. These applications’ ease-of-use and impressive capabilities attracted over a hundred million users in just a few months - ChatGPT is the fastest-growing application in history. No training is necessary. Just ask these models to do something. The prompts can be precise or ambiguous. If not clear, through conversation, ChatGPT learns your intentions. The generated text is beyond impressive. ChatGPT can compose memos and poems, paraphrase a research paper, solve math problems, highlight key points of a contract, and even code software programs. ChatGPT is a computer that not only runs software but writes software. Many breakthroughs led to Generative AI. Transformers learn context and meaning from the relationships and dependencies of data, in parallel and at large scale. This led to large language models that learn from so much data they can perform downstream tasks without explicit training. And diffusion models, inspired by physics, learn without supervision to generate images. In just over a
DiGB5uAYKAg
. The generated text is beyond impressive. ChatGPT can compose memos and poems, paraphrase a research paper, solve math problems, highlight key points of a contract, and even code software programs. ChatGPT is a computer that not only runs software but writes software. Many breakthroughs led to Generative AI. Transformers learn context and meaning from the relationships and dependencies of data, in parallel and at large scale. This led to large language models that learn from so much data they can perform downstream tasks without explicit training. And diffusion models, inspired by physics, learn without supervision to generate images. In just over a decade, we went from trying to recognize cats to generating realistic images of a cat in a space suit walking on the moon. Generative AI is a new kind of computer — one that we program in human language. This ability has profound implications. Everyone can direct a computer to solve problems. This was a domain only for computer programmers. Now everyone is a programmer. Generative AI is a new computing platform like PC, internet, mobile, and cloud. And like in previous computing eras, first-movers are creating new applications and founding new companies to capitalize on Generative AI’s ability to automate and co-create. Debuild lets users design and deploy web applications just by explaining what they want. Grammarly is a writing assistant that considers context. Tabnine helps developers write code. Omneky generates customized ads and copy. Kore.ai is a virtual customer service agent. Jasper generates marketing material. Jasper has written nearly 5 billion words, reducing time to generate the first draft by 80%. Insilico uses AI to accelerate drug design. Absci is using AI to predict therapeutic antibodies. Generative AI will reinvent nearly every industry. Many companies can use one of the excellent Generative AI APIs coming to market. Some companies need to build custom models, with their proprietary data, that are experts in their domain. They need to set up usage guardrails and refine their models to align with their company’s safety, privacy, and security requirements. The industry needs a foundry, a TSMC, for custom large language models. Today, we announce NVIDIA AI Foundations, a cloud service for customers needing to build, refine, and operate custom LLMs, large language models, and Generative AI trained with their proprietary data and for their domain-specific tasks. NVIDIA AI Foundations comprises Language, Visual, and Biology model-making services
DiGB5uAYKAg
Some companies need to build custom models, with their proprietary data, that are experts in their domain. They need to set up usage guardrails and refine their models to align with their company’s safety, privacy, and security requirements. The industry needs a foundry, a TSMC, for custom large language models. Today, we announce NVIDIA AI Foundations, a cloud service for customers needing to build, refine, and operate custom LLMs, large language models, and Generative AI trained with their proprietary data and for their domain-specific tasks. NVIDIA AI Foundations comprises Language, Visual, and Biology model-making services. NVIDIA NeMo is for building custom language text-to-text generative models. Customers can bring their model or start with the NeMo pre-trained language models, ranging from 8 billion to 43 billion to 530 billion parameters (GPT-8, GPT-43, and GPT-530). Throughout the entire process, NVIDIA AI experts will work with you, from creating your proprietary model to operations. Let’s take a look. Generative models, like NVIDIA’s 43B foundational model, learn by training on billions of sentences and trillions of words. As the model converges, it begins to understand the relationships between words and their underlying concepts, captured in the weights in the embedding space of the model. Transformer models use a technique called self-attention: a mechanism designed to learn dependencies and relationships within a sequence of words. The result is a model that provides the foundation for a ChatGPT-like experience. These generative models require expansive amounts of data, deep AI expertise for data processing and distributed training, and large-scale compute to train, deploy and maintain at the pace of innovation. Enterprises can fast-track their generative AI adoption with NVIDIA NeMo service running on NVIDIA DGX Cloud. The quickest path is starting with one of NVIDIA’s state-of-the-art pre-trained foundation models. With the NeMo service, organizations can easily customize a model with p-tuning to teach it specialized skills like summarizing financial documents, creating brand-specific content, and composing emails with personalized writing styles. Connecting the model to a proprietary knowledge base ensures that responses are accurate, current and cited for their business. Next, they can provide guardrails by adding logic and monitoring inputs, outputs, toxicity, and bias thresholds so it operates within a specified domain and prevents undesired responses. After putting the model to work, it can continuously improve with reinforcement learning based on user interactions. And NeMo
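Since the passage leans on self-attention without showing it, here is a minimal numpy sketch of scaled dot-product attention over a toy sequence. The dimensions and random weights are arbitrary placeholders; real transformer layers add multiple heads, masking, and learned projections trained at enormous scale.

```python
# Minimal scaled dot-product self-attention over a toy sequence (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16              # 5 "words", 16-dim embeddings (arbitrary sizes)
x = rng.normal(size=(seq_len, d_model))

# Learned projection matrices in a real model; random here.
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

scores = Q @ K.T / np.sqrt(d_model)                   # how much each word attends to each other word
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)        # softmax over the sequence
output = weights @ V                                  # blend of value vectors per position

print(weights.round(2))   # each row sums to 1: one word's attention distribution
print(output.shape)       # (5, 16): contextualized representation of each word
```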
DiGB5uAYKAg
the-art pre-trained foundation models. With the NeMo service, organizations can easily customize a model with p-tuning to teach it specialized skills like summarizing financial documents, creating brand-specific content, and composing emails with personalized writing styles. Connecting the model to a proprietary knowledge base ensures that responses are accurate, current and cited for their business. Next, they can provide guardrails by adding logic and monitoring inputs, outputs, toxicity, and bias thresholds so it operates within a specified domain and prevents undesired responses. After putting the model to work, it can continuously improve with reinforcement learning based on user interactions. And NeMo’s playground is available for rapid prototyping before moving to the cloud API for larger-scale evaluation and application integration. Sign up for the NVIDIA NeMo service today to codify your enterprise’s knowledge into a personalized AI model that you control. Picasso is a visual language model-making service for customers who want to build custom models trained with licensed or proprietary content. Let’s take a look. Generative AI is transforming how visual content is created. But to realize its full potential, enterprises need massive amounts of copyright-cleared data, AI experts, and an AI supercomputer. NVIDIA Picasso is a cloud service for building and deploying generative AI-powered image, video, and 3D applications. With it, enterprises, ISVs, and service providers can deploy their own models. We're working with premier partners to bring generative AI capabilities to every industry. Organizations can also start with NVIDIA Edify models and train them on their data to create a product or service. These models generate images, videos, and 3D assets. To access generative AI models, applications send an API call with text prompts and metadata to Picasso. Picasso uses the appropriate model running on NVIDIA DGX Cloud to send back the generated asset to the application. This can be a photorealistic image, a high-resolution video, or a detailed 3D geometry. Generated assets can be imported into editing tools or into NVIDIA Omniverse to build photorealistic virtual worlds, metaverse applications, and digital twin simulations. With NVIDIA Picasso services running on NVIDIA DGX Cloud, you can streamline training, optimization, and inference needed to build custom generative AI applications. See how NVIDIA Picasso can bring transformative generative AI capabilities into your applications. We are delighted that Getty Images will use the Picasso service to build Edify-image and Edify-video generative
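The passage describes the call pattern (prompt plus metadata in, generated asset out) without showing it. The sketch below is a generic illustration of that request/response shape using the requests library; the endpoint URL, JSON fields, and API-key header are all invented placeholders, not the actual Picasso API.

```python
# Generic illustration of the "send a prompt + metadata, get an asset back" pattern the
# passage describes. The URL, payload fields, and headers are hypothetical placeholders,
# NOT the real NVIDIA Picasso API.
import requests

ENDPOINT = "https://example.com/v1/generate"          # placeholder endpoint
payload = {
    "prompt": "a photorealistic mountain lake at sunrise",
    "modality": "image",                              # could be image, video, or 3D per the passage
    "metadata": {"resolution": "1024x1024", "seed": 7},
}
headers = {"Authorization": "Bearer <YOUR_API_KEY>"}  # placeholder credential

resp = requests.post(ENDPOINT, json=payload, headers=headers, timeout=60)
resp.raise_for_status()
with open("generated_asset.png", "wb") as f:          # assumes the response body is the asset bytes
    f.write(resp.content)
```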
DiGB5uAYKAg
the application. This can be a photorealistic image, a high-resolution video, or a detailed 3D geometry. Generated assets can be imported into editing tools or into NVIDIA Omniverse to build photorealistic virtual worlds, metaverse applications, and digital twin simulations. With NVIDIA Picasso services running on NVIDIA DGX Cloud, you can streamline training, optimization, and inference needed to build custom generative AI applications. See how NVIDIA Picasso can bring transformative generative AI capabilities into your applications. We are delighted that Getty Images will use the Picasso service to build Edify-image and Edify-video generative models trained on their rich library of responsibly licensed professional images and video assets. Enterprises will be able to create custom images and video with simple text or image prompts. Shutterstock is developing an Edify-3D generative model trained on their professional image, 3D, and video assets library. Shutterstock will help simplify the creation of 3D assets for creative production, digital twins and virtual collaboration, making these workflows faster and easier for enterprises to implement. And I’m thrilled to announce a significant expansion of our long-time partnership with Adobe to build a set of next-generation AI capabilities for the future of creativity, integrating generative AI into the everyday workflows of marketers and creative professionals. The new Generative AI models will be optimized for image creation, video, 3D, and animation. To protect artists’ rights, Adobe is developing with a focus on commercial viability and proper content attribution, powered by Adobe’s Content Authenticity Initiative. Our third language domain is biology. Drug discovery is a nearly $2T industry with $250B dedicated to R&D. NVIDIA’s Clara is a healthcare application framework for imaging instruments, genomics, and drug discovery. The industry is now jumping onto generative AI to discover disease targets, design novel molecules or protein-based drugs, and predict the behavior of the medicines in the body. Insilico Medicine, Exscientia, Absci, and Evozyne are among hundreds of new AI drug discovery start-ups. Several have discovered novel targets or drug candidates and have started human clinical trials. BioNeMo helps researchers create, fine-tune, and serve custom models with their proprietary data. Let’s take a look. There are 3 key stages to drug discovery: discovering the biology that causes disease; designing new molecules, whether those are small molecules, proteins or antibodies; and finally screening how
DiGB5uAYKAg
disease targets, design novel molecules or protein-based drugs, and predict the behavior of the medicines in the body. Insilico Medicine, Exscientia, Absci, and Evozyne are among hundreds of new AI drug discovery start-ups. Several have discovered novel targets or drug candidates and have started human clinical trials. BioNeMo helps researchers create, fine-tune, and serve custom models with their proprietary data. Let’s take a look. There are 3 key stages to drug discovery: discovering the biology that causes disease; designing new molecules, whether those are small molecules, proteins or antibodies; and finally screening how those molecules interact with each other. Today, Generative AI is transforming every step of the drug discovery process. NVIDIA BioNeMo Service provides state-of-the-art generative AI models for drug discovery. It’s available as a cloud service, providing instant and easy access to accelerated drug discovery workflows. BioNeMo includes models like AlphaFold, ESMFold, and OpenFold for 3D protein structure prediction, ProtGPT for protein generation, ESM1 and ESM2 for protein property prediction, MegaMolBART and MoFlow for molecule generation, and DiffDock for molecular docking. Drug discovery teams can use the models through BioNeMo’s web interface or cloud APIs. Here is an example of using NVIDIA BioNeMo for drug discovery virtual screening. Generative models can now read a protein’s amino acid sequence and, in seconds, accurately predict the structure of a target protein. They can also generate molecules with desirable ADME properties that optimize how a drug behaves in the body. Generative models can even predict the 3D interactions of a protein and molecule, accelerating the discovery of optimal drug candidates. With NVIDIA DGX Cloud, BioNeMo also provides on-demand supercomputing infrastructure to further optimize and train models, saving teams valuable time and money so they can focus on discovering life saving medicines. The new AI drug discovery pipelines are here. Sign up for access for NVIDIA BioNeMo Service. We will continue to work with the industry to include models into BioNemo that encompass the end-to-end workflow of drug discovery and virtual screening. Amgen, AstraZeneca, Insilico Medicine, Evozyne, Innophore, and Alchemab Therapeutics are early access users of BioNeMo. NVIDIA AI Foundations, a cloud service, a foundry, for building custom language models and Generative AI
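To make the three-stage pipeline above concrete in code form, here is a schematic sketch of a virtual-screening loop. Every function in it (predict_structure, generate_candidates, dock) is a hypothetical placeholder standing in for the kinds of models named above (ESMFold/OpenFold, MegaMolBART/MoFlow, DiffDock); none of this is the BioNeMo API.

```python
# Schematic virtual-screening loop mirroring the three stages described above.
# All functions are hypothetical placeholders for structure prediction, molecule
# generation, and docking models -- not the BioNeMo API.
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    smiles: str
    dock_score: float

def predict_structure(sequence):
    """Placeholder for ESMFold/OpenFold-style structure prediction."""
    return f"structure({sequence[:8]}...)"

def generate_candidates(n):
    """Placeholder for MegaMolBART/MoFlow-style molecule generation."""
    return [f"SMILES_{i}" for i in range(n)]

def dock(structure, smiles):
    """Placeholder for DiffDock-style protein-ligand scoring (lower = better)."""
    return random.uniform(-12.0, -4.0)

random.seed(0)
target = predict_structure("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")   # made-up sequence
hits = sorted(
    (Candidate(s, dock(target, s)) for s in generate_candidates(1000)),
    key=lambda c: c.dock_score,
)[:5]
for hit in hits:
    print(hit)    # the best-scoring candidates would move on to wet-lab screening
```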
DiGB5uAYKAg
saving teams valuable time and money so they can focus on discovering life saving medicines. The new AI drug discovery pipelines are here. Sign up for access for NVIDIA BioNeMo Service. We will continue to work with the industry to include models into BioNemo that encompass the end-to-end workflow of drug discovery and virtual screening. Amgen, AstraZeneca, Insilico Medicine, Evozyne, Innophore, and Alchemab Therapeutics are early access users of BioNeMo. NVIDIA AI Foundations, a cloud service, a foundry, for building custom language models and Generative AI. Since AlexNet a decade ago, deep learning has opened giant new markets — automated driving, robotics, smart speakers, and reinvented how we shop, consume news, and enjoy music. That’s just the tip of the iceberg. AI is at an inflection point as Generative AI has started a new wave of opportunities, driving a step-function increase in inference workloads. AI can now generate diverse data, spanning voice, text, images, video, and 3D graphics to proteins and chemicals. Designing a cloud data center to process Generative AI is a great challenge. On the one hand, a single type of accelerator is ideal, because it allows the datacenter to be elastic and handle the unpredictable peaks and valleys of traffic. On the other hand, no one accelerator can optimally process the diversity of algorithms, models, data types, and sizes. NVIDIA's One Architecture platform offers both acceleration and elasticity. Today, we are announcing our new inference platform - four configurations - one architecture - one software stack. Each configuration is optimized for a class of workloads. For AI video workloads, we have L4 optimized for video decoding and transcoding, video content moderation, and video call features like background replacement, relighting, making eye contact, transcription, and real-time language translation. Most cloud videos today are processed on CPUs. One 8-GPU L4 server will replace over a hundred dual-socket CPU servers for processing AI Video. Snap is a leading user of NVIDIA AI for computer vision and recommender systems. Snap will use L4 for AV1 video processing, generative AI, and augmented reality. Snapchat users upload hundreds of millions of videos every day. Google announced today NVIDIA L4 on GCP. NVIDIA and Google Cloud are working to deploy major workloads on L4. Let me highlight five. First,
DiGB5uAYKAg
relighting, making eye contact, transcription, and real-time language translation. Most cloud videos today are processed on CPUs. One 8-GPU L4 server will replace over a hundred dual-socket CPU servers for processing AI Video. Snap is a leading user of NVIDIA AI for computer vision and recommender systems. Snap will use L4 for AV1 video processing, generative AI, and augmented reality. Snapchat users upload hundreds of millions of videos every day. Google announced today NVIDIA L4 on GCP. NVIDIA and Google Cloud are working to deploy major workloads on L4. Let me highlight five. First, we’re accelerating inference for generative AI models for cloud services like Wombo and Descript. Second, we’re integrating Triton Inference Server with Google Kubernetes Engine and VertexAI. Third, we’re accelerating Google Dataproc with NVIDIA Spark-RAPIDS. Fourth, we’re accelerating AlphaFold, and UL2 and T5 large language models. And fifth, we are accelerating Google Cloud’s Immersive Stream that renders 3D and AR experiences. With this collaboration, Google GCP is a premiere NVIDIA AI cloud. We look forward to telling you even more about our collaboration very soon. For Omniverse, graphics rendering and generative AI like text-to-image and text-to-video, we are announcing L40. L40 is up to 10 times the performance of NVIDIA’s T4, the most popular cloud inference GPU. Runway is a pioneer in Generative AI. Their research team was a key creator of Stable Diffusion and its predecessor, Latent Diffusion. Runway is inventing generative AI models for creating and editing content. With over 30 AI Magic Tools, their service is revolutionizing the creative process, all from the cloud. Let's take a look. Runway is making amazing AI-powered video editing and image creation tools accessible to everyone. Powered by the latest generation of NVIDIA GPUs running locally or in the cloud, Runway makes it possible to remove an object from a video with just a few brush strokes. Or apply different styles to video using just an input image. Or change the background or the foreground of a video. What used to take hours using conventional tools can now be completed with professional broadcast quality results in just a few minutes. Runway does this by utilizing CV-CUDA, an open-source project that
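For readers unfamiliar with Triton Inference Server, mentioned in the Google Cloud collaboration above, the sketch below shows a minimal client request using the open-source tritonclient package; the model name and tensor names ("my_model", "INPUT0", "OUTPUT0") are assumptions about a hypothetical deployment, not anything announced here.

# Minimal sketch of querying a model already served by Triton Inference Server
# (pip install tritonclient[http]). Model and tensor names are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

data = np.random.rand(1, 16).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))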
DiGB5uAYKAg
, all from the cloud. Let's take a look. Runway is making amazing AI-powered video editing and image creation tools accessible to everyone. Powered by the latest generation of NVIDIA GPUs running locally or in the cloud, Runway makes it possible to remove an object from a video with just a few brush strokes. Or apply different styles to video using just an input image. Or change the background or the foreground of a video. What used to take hours using conventional tools can now be completed with professional broadcast-quality results in just a few minutes. Runway does this by utilizing CV-CUDA, an open-source project that enables developers to build highly efficient GPU-accelerated pre- and post-processing pipelines for computer vision workloads and scale them into the cloud. With NVIDIA technology, Runway is able to do impossible things and give the best experience to content creators. What was previously limited to pros can now be done by you. In fact, Runway is used in Oscar-nominated Hollywood films, and we are placing this technology in the hands of the world's creators. Large language models like ChatGPT are a significant new inference workload. GPT models are memory and computationally intensive. Furthermore, inference is a high-volume, scale-out workload and requires standard commodity servers. For large language model inference, like ChatGPT, we are announcing a new Hopper GPU — the PCIE H100 with dual-GPU NVLINK. The new H100 has 94GB of HBM3 memory. H100 can process the 175-billion-parameter GPT-3, and support for commodity PCIE servers makes it easy to scale out. The only GPU in the cloud today that can practically process ChatGPT is HGX A100. Compared to HGX A100 for GPT-3 processing, a standard server with four pairs of H100 with dual-GPU NVLINK is up to 10X faster. H100 can reduce large language model processing costs by an order of magnitude. Grace Hopper is our new superchip that connects Grace CPU and Hopper GPU over a high-speed 900 GB/sec coherent chip-to-chip interface. Grace Hopper is ideal for processing giant data sets like AI databases for recommender systems and large language models. Today, CPUs, with large memory, store and query giant embedding tables, then transfer results to GPUs for inference. With Grace-Hopper, Grace queries the embedding tables and transfers the results directly to Hopper across the high
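Some rough arithmetic shows why GPT-3-class inference needs several large-memory GPUs working together: at FP16 the weights alone far exceed any single GPU's memory. The sketch below assumes 2 bytes per parameter and ignores activations and the KV cache.

# Rough arithmetic on why a 175-billion-parameter model needs multiple large-memory GPUs.
params = 175e9
bytes_per_param = 2                           # FP16
weights_gb = params * bytes_per_param / 1e9   # ~350 GB of weights alone

hbm_per_h100_nvl_gpu = 94                     # GB of HBM3 per GPU in the NVL pair
gpus_needed = weights_gb / hbm_per_h100_nvl_gpu
print(f"Weights: ~{weights_gb:.0f} GB -> at least {gpus_needed:.1f} GPUs for weights alone")
# ~350 GB / 94 GB ≈ 3.7 GPUs, which is why the keynote pairs H100s over NVLINK and
# scales out across several pairs per server.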
DiGB5uAYKAg
NVLINK is up to 10X faster. H100 can reduce large language model processing costs by an order of magnitude. Grace Hopper is our new superchip that connects Grace CPU and Hopper GPU over a high-speed 900 GB/sec coherent chip-to-chip interface. Grace Hopper is ideal for processing giant data sets like AI databases for recommender systems and large language models. Today, CPUs, with large memory, store and query giant embedding tables then transfer results to GPUs for inference. With Grace-Hopper, Grace queries the embedding tables and transfers the results directly to Hopper across the high-speed interface – 7 times faster than PCIE. Customers want to build AI databases several orders of magnitude larger. Grace-Hopper is the ideal engine. This is NVIDIA's inference platform – one architecture for diverse AI workloads, and maximum datacenter acceleration and elasticity. The world’s largest industries make physical things, but they want to build them digitally. Omniverse is a platform for industrial digitalization that bridges digital and physical. It lets industries design, build, operate, and optimize physical products and factories digitally, before making a physical replica. Digitalization boosts efficiency and speed and saves money. One use of Omniverse is the virtual bring-up of a factory, where all of its machinery is integrated digitally before the real factory is built. This reduces last-minute surprises, change orders, and plant opening delays. Virtual factory integration can save billions for the world’s factories. The semiconductor industry is investing half a trillion dollars to build a record 84 new fabs. By 2030, auto manufacturers will build 300 factories to make 200 million electric vehicles. And battery makers are building 100 more mega factories. Digitalization is also transforming logistics, moving goods through billions of square feet of warehouses worldwide. Let’s look at how Amazon uses Omniverse to automate, optimize, and plan its autonomous warehouses. Amazon Robotics has manufactured and deployed the largest fleet of mobile industrial robots in the world. The newest member of this robotic fleet is Proteus, Amazon's first fully autonomous warehouse robot. Proteus is built to move through our facilities using advanced safety, perception, and navigation technology. Let's see how NVIDIA Isaac Sim, built on Omniverse is creating physically accurate, photoreal simulations to help accelerate Proteus deployments. Proteus features multiple sensors that include cameras, lidars, and ultrasonic sensors to power it
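The "7 times faster than PCIE" transfer figure follows directly from link bandwidths. The sketch below uses the 900 GB/sec number stated above and an assumed ~128 GB/sec for a PCIe Gen5 x16 link; the batch size and embedding width are invented for illustration.

# Sketch of why link bandwidth matters for CPU-resident embedding tables.
batch_lookups = 1_000_000          # embedding rows gathered per batch (assumed)
embedding_dim = 256                # floats per row (assumed)
bytes_per_batch = batch_lookups * embedding_dim * 4   # FP32

nvlink_c2c_gbps = 900              # GB/s, Grace-Hopper coherent chip-to-chip link
pcie_gbps = 128                    # GB/s, assumed PCIe Gen5 x16

for name, bw in [("NVLink-C2C", nvlink_c2c_gbps), ("PCIe", pcie_gbps)]:
    ms = bytes_per_batch / (bw * 1e9) * 1e3
    print(f"{name}: {ms:.2f} ms to move {bytes_per_batch / 1e9:.2f} GB of embeddings")
# The ~7x bandwidth gap is the source of the "7 times faster than PCIE" claim.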
DiGB5uAYKAg
’s look at how Amazon uses Omniverse to automate, optimize, and plan its autonomous warehouses. Amazon Robotics has manufactured and deployed the largest fleet of mobile industrial robots in the world. The newest member of this robotic fleet is Proteus, Amazon's first fully autonomous warehouse robot. Proteus is built to move through our facilities using advanced safety, perception, and navigation technology. Let's see how NVIDIA Isaac Sim, built on Omniverse, is creating physically accurate, photoreal simulations to help accelerate Proteus deployments. Proteus features multiple sensors that include cameras, lidars, and ultrasonic sensors to power its autonomy software systems. The Proteus team needed to improve the performance of a neural network that read fiducial markers and helped the robot determine its location on the map. It takes lots of data—and the right kind—to train the ML models that are driven by the robot sensor input. With Omniverse Replicator in Isaac Sim, Amazon Robotics was able to generate large photoreal synthetic datasets that improved the marker detection success rate from 88.6% to 98%. The use of the synthetic data generated by Omniverse Replicator also sped up development times, from months to days, as we were able to iteratively test and train our models much faster than when only using real data. To enable new autonomous capabilities for the expanding fleet of Proteus robots, Amazon Robotics is working towards closing the gap from simulation to reality, building large-scale multi-sensor, multi-robot simulations. With Omniverse, Amazon Robotics will optimize operations with full-fidelity warehouse digital twins. Whether we're generating synthetic data or developing new levels of autonomy, Isaac Sim on Omniverse helps the Amazon Robotics team save time and money as we deploy Proteus across our facilities. Omniverse has unique technologies for digitalization. And Omniverse is the premier development platform for USD, which serves as a common language that lets teams collaborate to create virtual worlds and digital twins. Omniverse is physically based, mirroring the laws of physics. It can connect to robotic systems and operate with hardware-in-the-loop. It features Generative AI to accelerate the creation of virtual worlds. And Omniverse can manage data sets of enormous scale. We've made significant updates to Omniverse in every area. Let's take a look. Nearly 300,000 creators and designers have downloaded Omniverse. Omniverse is not a tool, but a USD network and shared database, a fabric connecting to design
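The synthetic-data workflow described above can be pictured with a generic training sketch (plain PyTorch, not the Omniverse Replicator API) that mixes a real dataset with a Replicator-style synthetic dataset when training a marker detector. The dataset paths, the MarkerDataset placeholder, and the stand-in model are all hypothetical.

# Generic sketch of mixing synthetic and real data for a fiducial-marker detector.
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class MarkerDataset(Dataset):
    """Placeholder dataset yielding (image_tensor, marker_label) pairs."""
    def __init__(self, root):
        self.samples = []  # would index image/label files under `root`
    def __len__(self):
        return len(self.samples)
    def __getitem__(self, idx):
        return self.samples[idx]

real = MarkerDataset("data/real_warehouse")         # hypothetical path
synthetic = MarkerDataset("data/replicator_synth")  # hypothetical path
loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=64, shuffle=True)

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(2))  # stand-in detector
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in loader:                       # one epoch over the mixed data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()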
DiGB5uAYKAg
for USD, which serves as a common language that lets teams collaborate to create virtual worlds and digital twins. Omniverse is physically based, mirroring the laws of physics. It can connect to robotic systems and operate with hardware-in-the-loop. It features Generative AI to accelerate the creation of virtual worlds. And Omniverse can manage data sets of enormous scale. We've made significant updates to Omniverse in every area. Let’s take a look. Nearly 300,000 creators and designers have downloaded Omniverse. Omniverse is not a tool, but a USD network and shared database, a fabric connecting to design tools used across industries. It connects, composes, and simulates the assets created by industry-leading tools. We are delighted to see the growth of Omniverse connections. Each connection links the ecosystem of one platform to the ecosystems of all the others. Omniverse’s network of networks is growing exponentially. Bentley Systems LumenRT is now connected. So are Siemens Teamcenter, NX, and Process Simulate, Rockwell Automation Emulate 3D, Cesium, Unity, and many more. Let’s look at the digitalization of the $3T auto industry and see how car companies are evaluating Omniverse in their workflows. Volvo Cars and GM use Omniverse USD Composer to connect and unify their asset pipelines. GM connects designers, sculptors, and artists using Alias, Siemens NX, Unreal, Maya, 3ds Max, and virtually assembles the components into a digital twin of the car. In engineering and simulation, they visualize the power flow aerodynamics in Omniverse. For next-generation Mercedes-Benz and Jaguar Land Rover vehicles, engineers use Drive Sim in Omniverse to generate synthetic data to train AI models, validate the active-safety system against a virtual NCAP driving test, and simulate real driving scenarios. Omniverse’s generative AI reconstructs previously driven routes into 3D so past experiences can be reenacted or modified. Working with Idealworks, BMW uses Isaac Sim in Omniverse to generate synthetic data and scenarios to train factory robots. Lotus is using Omniverse to virtually assemble welding stations. Toyota is using Omniverse to build digital twins of their plants. Mercedes-Benz uses Omniverse to build, optimize, and plan assembly lines for new models. Rimac and Lucid Motors use Omniverse to build digital stores from actual design data that faithfully represent their cars. BMW
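Because USD is the common language these connections speak, a tiny OpenUSD example may make the idea concrete. The sketch below authors a minimal stage with the open-source pxr Python bindings (installable as usd-core); the stage contents are invented, and real pipelines compose layers authored by many different tools.

# Minimal OpenUSD sketch of the kind of shared scene description connectors exchange.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("factory_cell.usda")
world = UsdGeom.Xform.Define(stage, "/World")
stage.SetDefaultPrim(world.GetPrim())

# A placeholder robot base; a design tool's connector would author the real geometry.
robot = UsdGeom.Cube.Define(stage, "/World/RobotBase")
robot.GetSizeAttr().Set(1.0)
UsdGeom.XformCommonAPI(robot.GetPrim()).SetTranslate(Gf.Vec3d(2.0, 0.0, 0.0))

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())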
DiGB5uAYKAg
, and simulate real driving scenarios. Omniverse’s generative AI reconstructs previously driven routes into 3D so past experiences can be reenacted or modified. Working with Idealworks, BMW uses Isaac Sim in Omniverse to generate synthetic data and scenarios to train factory robots. Lotus is using Omniverse to virtually assemble welding stations. Toyota is using Omniverse to build digital twins of their plants. Mercedes-Benz uses Omniverse to build, optimize, and plan assembly lines for new models. Rimac and Lucid Motors use Omniverse to build digital stores from actual design data that faithfully represent their cars. BMW is using Omniverse to plan operations across nearly three dozen factories worldwide. And they are building a new EV factory, completely in Omniverse, two years before the physical plant opens. Let's visit. The world’s industries are accelerating digitalization with over $3.4 trillion being invested in the next three years. We at BMW strive to be leading edge in automotive digitalization. With NVIDIA Omniverse and AI we set up new factories faster and produce more efficiently than ever. This results in significant savings for us. It all starts with planning – a complex process in which we need to connect many tools, datasets and specialists around the world. Traditionally, we are limited, since data is managed separately in a variety of systems and tools. Today, we’ve changed all that. We are developing custom Omniverse applications to connect our existing tools, know-how and teams all in a unified view. Omniverse is cloud-native and cloud-agnostic enabling teams to collaborate across our virtual factories from everywhere. I’m about to join a virtual planning session for Debrecen in Hungary – our new EV factory – opening in 2025. Letʼs jump in. Planner 1: Ah, Milan is joining. Milan: Hello, everyone! Planner 1:Hi Milan – great to see you, we’re in the middle of an optimization loop for our body shop. Would you like to see? Milan: Thanks – I’m highly interested. And I’d like to invite a friend. Planner 1: Sure. Jensen: Hey Milan! Good to see you. Milan: Jensen, welcome to our virtual planning session. Jensen: Its great to be here. What are we looking at? Milan: This is our global planning team who are working on a robot cell in Debrecen’s digital twin. Matthias,
DiGB5uAYKAg
everyone! Planner 1: Hi Milan – great to see you, we're in the middle of an optimization loop for our body shop. Would you like to see? Milan: Thanks – I'm highly interested. And I'd like to invite a friend. Planner 1: Sure. Jensen: Hey Milan! Good to see you. Milan: Jensen, welcome to our virtual planning session. Jensen: It's great to be here. What are we looking at? Milan: This is our global planning team who are working on a robot cell in Debrecen's digital twin. Matthias, tell us what's happening … Matthias: So, we just learned the production concept requires some changes. We're now reconfiguring the layout to add a new robot into the cell. Planner 2: Ok, but if we add a new robot, on the logistics side, we'll need to move our storage container. Planner 3: Alright, let's get this new robot in. Matthias: That's perfect. But let's double-check - can we run the cell? Excellent. Jensen: Milan, this is just incredible! Virtual factory integration is essential for every industry. I'm so proud to see what our teams did together. Congratulations! Milan: We are working globally to optimize locally. After planning, operations is king, and we've already started! To celebrate the launch of our virtual plant, I'd like to invite you to open the first digital factory with me. Jensen: I'd be honored. Let's do it! Car companies employ nearly 14 million people. Digitalization will enhance the industry's efficiency, productivity, and speed. Omniverse is the digital-to-physical operating system to realize industrial digitalization. Today we are announcing three systems designed to run Omniverse. First, we're launching a new generation of workstations powered by NVIDIA Ada RTX GPUs and Intel's newest CPUs. The new workstations are ideal for doing ray tracing, physics simulation, neural graphics, and generative AI. They will be available from Boxx, Dell, HP, and Lenovo starting in March. Second, new NVIDIA OVX servers optimized for Omniverse. OVX consists of L40 Ada RTX server GPUs and our new BlueField-3. OVX servers will be available from Dell, HPE
DiGB5uAYKAg
Today we are announcing three systems designed to run Omniverse. First, we're launching a new generation of workstations powered by NVIDIA Ada RTX GPUs and Intel's newest CPUs. The new workstations are ideal for doing ray tracing, physics simulation, neural graphics, and generative AI. They will be available from Boxx, Dell, HP, and Lenovo starting in March. Second, new NVIDIA OVX servers optimized for Omniverse. OVX consists of L40 Ada RTX server GPUs and our new BlueField-3. OVX servers will be available from Dell, HPE, Quanta, Gigabyte, Lenovo, and Supermicro. Every layer of the Omniverse stack, including the chips, systems, networking, and software, is a new invention. Building and operating the Omniverse computer requires a sophisticated IT team. We're going to make Omniverse fast and easy to scale and engage with. Let's take a look. The world's largest industries are racing to digitalize their physical processes. Today, that's a complex undertaking. NVIDIA Omniverse Cloud is a platform-as-a-service that provides instant, secure access to managed Omniverse Cloud APIs, workflows, and customizable applications running on NVIDIA OVX. Enterprise teams access the suite of managed services through the web browser, Omniverse Launcher, or via a custom-built integration. Once in Omniverse Cloud, enterprise teams can instantly access, extend, and publish foundation applications and workflows - to assemble and compose virtual worlds - generate data to train perception AIs - test and validate autonomous vehicles - or simulate autonomous robots… accessing and publishing shared data to Omniverse Nucleus. Designers and engineers working in their favorite 3rd-party design tools on RTX workstations publish edits to Nucleus in parallel. Then, when ready to iterate or view their integrated model in Omniverse, they can simply open a web browser and log in. As projects and teams scale, Omniverse Cloud helps optimize cost by provisioning compute resources and licenses as needed. And new services and upgrades are automatically provided with real-time updates. With Omniverse Cloud, enterprises can fast-track unified digitalization and collaboration across major industrial workflows, increasing efficiency, reducing costs and waste, and accelerating the path to innovation. See you in Omniverse! Today, we announce the NVIDIA Omniverse Cloud, a fully managed cloud service. We're
DiGB5uAYKAg
Then when ready to iterate or view their integrated model in Omniverse, can simply open a web browser and log in. As projects and teams scale, Omniverse Cloud helps optimize cost by provisioning compute resources and licenses as needed. And new services and upgrades are automatically provided with real time updates. With Omniverse Cloud, enterprises can fast-track unified digitalization and collaboration across major industrial workflows, increasing efficiency, reducing costs and waste, and accelerating the path to innovation. See you in Omniverse! Today, we announce the NVIDIA Omniverse Cloud, a fully managed cloud service. We’re partnering with Microsoft to bring Omniverse Cloud to the world’s industries. We will host it in Azure, benefiting from Microsoft’s rich storage, security, applications, and services portfolio. We are connecting Omniverse Cloud to Microsoft 365 productivity suite, including Teams, OneDrive, SharePoint, and the Azure IoT Digital Twins services. Microsoft and NVIDIA are bringing Omniverse to hundreds of millions of Microsoft 365 and Azure users. Accelerated computing and AI have arrived. Developers use NVIDIA to speed-up and scale-up to solve problems previously impossible. A daunting challenge is Net Zero. Every company must accelerate every workload to reclaim power. Accelerated computing is a full-stack, datacenter-scale computing challenge. Grace, Grace-Hopper, and BlueField-3 are new chips for super energy-efficient accelerated data centers. Acceleration libraries solve new challenges and open new markets. We updated 100 acceleration libraries, including cuQuantum for quantum computing, cuOpt for combinatorial optimization, and cuLitho for computational lithography. We are thrilled to partner with TSMC, ASML, and Synopsys to go to 2nm and beyond. NVIDIA DGX AI Supercomputer is the engine behind the generative large language model breakthrough. The DGX H100 AI Supercomputer is in production and available soon from an expanding network of OEM and cloud partners worldwide. The DGX supercomputer is going beyond research and becoming a modern AI factory. Every company will manufacture intelligence. We are extending our business model with NVIDIA DGX Cloud by partnering with Microsoft Azure, Google GCP, and Oracle OCI to instantly bring NVIDIA AI to every company, from a browser. DGX Cloud offers customers the best of NVIDIA and the best of the world’s leading CSPs. We are at the iPhone moment for AI. Generative AI inference workload
DiGB5uAYKAg
model breakthrough. The DGX H100 AI Supercomputer is in production and available soon from an expanding network of OEM and cloud partners worldwide. The DGX supercomputer is going beyond research and becoming a modern AI factory. Every company will manufacture intelligence. We are extending our business model with NVIDIA DGX Cloud by partnering with Microsoft Azure, Google GCP, and Oracle OCI to instantly bring NVIDIA AI to every company, from a browser. DGX Cloud offers customers the best of NVIDIA and the best of the world’s leading CSPs. We are at the iPhone moment for AI. Generative AI inference workloads have gone into overdrive. We launched our new inference platform - four configurations - one architecture. L4 for AI video. L40 for Omniverse and graphics rendering. H100 PCIE for scaling out large language model inference. Grace-Hopper for recommender systems and vector databases. NVIDIA’s inference platform enables maximum data center acceleration and elasticity. NVIDIA and Google Cloud are working together to deploy a broad range of inference workloads. With this collaboration, Google GCP is a premiere NVIDIA AI cloud. NVIDIA AI Foundations is a cloud service, a foundry, for building custom language models and Generative AI. NVIDIA AI Foundations comprises language, visual, and biology model-making services. Getty Images and Shutterstock are building custom visual language models. And we're partnering with Adobe to build a set of next-generation AI capabilities for the future of creativity. Omniverse is the digital-to-physical operating system to realize industrial digitalization. Omniverse can unify the end-to-end workflow and digitalize the $3T, 14 million-employee automotive industry. Omniverse is leaping to the cloud. Hosted in Azure, we partner with Microsoft to bring Omniverse Cloud to the world’s industries. I thank our systems, cloud, and software partners, researchers, scientists, and especially our amazing employees for building the NVIDIA accelerated computing ecosystem. Together, we are helping the world do the impossible. Have a great GTC!
cEg8cOx7UZk
welcome back everyone after the short break I know that many of you are looking forward to hearing from our next speaker Jensen Huang Jensen is at the cutting edge of artificial intelligence and all of the innovation technology and human capital that is needed to support it my good friend and SIEPR colleague John Shoven is going to introduce Jensen and I hope he's here somewhere so I'm just going to keep talking and then the two of them will have a conversation before taking some of your questions John Shoven certainly requires very little introduction to most in this crowd as my predecessor as the Trione Director of SIEPR John is the one who started the SIEPR Economic Summit 20 years ago so I would just like right now for all of us to give John Shoven a huge round of applause and appreciate the community that he had the foresight to build uh for those of you who haven't been touched by John's research his mentorship or his friendship here's just a snippet of what you might like to know about him along with being the former SIEPR director and a SIEPR senior fellow emeritus John is the Charles R. Schwab Professor of Economics he is also a senior fellow at the Hoover Institution and a research associate of the National Bureau of Economic Research he specializes in public finance and corporate finance and has published many articles over the years on social security health economics corporate and personal taxation mutual funds pension plans economic demography applied general equilibrium economics and much more uh John isn't one for long introductions but I will just say that if I can be one-tenth as helpful to my successor as John uh has been to me I'll feel like I've uh succeeded so I will let you read more about his publications and accomplishments in the programs you've received uh today and so please join me in welcoming our good friend John Shoven and I'm really looking forward to this thanks wow thank you so I have always thought that the more famous the speaker the shorter the appropriate introduction and if I was to follow that rule I would stop right now and say Jensen Huang but I'm not going to do that um so the Oxford English Dictionary defines the American dream believe it or not it does that and it says that it's a situation where everybody has an equal opportunity for success through hard work dedication and initiative and I would like to say that Jensen Huang is an example of the American dream Jensen uh was born in Taiwan came to the US at age nine with his brother not with his parents went to a rough tough school in Kentucky survived that his parents came two years later he
cEg8cOx7UZk
that the more famous the speaker the shorter the appropriate introduction and if I was to follow that rule I would stop right now and say Jensen Wong but I'm not going to do that um so the Oxford English Dictionary defines the American dream believe it or not it does that and it says that it's a situation where everybody has an equal opportunity for Success Through hard work dedication and initiative and I would like to say that Jensen Wong is an example of the American dream Jensen uh was born in Taiwan came to the US at age nine with his brother not with his parents went to a rough tough School in Kentucky survived that his parents came two years later he moved to Oregon skipped two grades and graduated from high school and went to Oregon state electrical engineering major 150 men and two women he said he was 16 he looked like he was 12 he had no chance with the women well he sort of liked one of them and said why don't we work on homework together did that over and over and over again six months later he after out for a date well he's still married to her so another American Dream now to skip to age 30 he co-founds Nvidia he's the only CEO there's ever been of Nvidia it's had its ups and its down more UPS than Downs it's now the fourth largest company in the world third largest American uh company so that sounds to me like the American dream um I should add that he also got a degree from Stanford master's degree I think he did it mostly at night uh and he was always good with homework at worked with his wife at worked with Stanford uh too um now of course we were here last week Nvidia announced its earnings in the finance crowd this got more attention than the Super Bowl that occurred a couple weeks earlier it was pretty uh amazing uh his company is at the absolute center of the most exciting develop vment I'd say of the 21st century technology development and uh so he's to be congratulated on that let me just say uh he's received a lot of awards a lot of recognition Enid has received a lot of awards a lot of recognition but I should have a short introduction so I'm about to quit I'm just going to talk about one award last month he was elected as a member of the National Academy of engineering this is a pretty prestigious award there are only three that I know of I actually asked chat GPT I didn't get an absolute clear answer how many CEOs of S&P 500 companies are members of the National Academy of engineering but I think it's three and two are in this room anaru Dev
cEg8cOx7UZk
be congratulated on that let me just say uh he's received a lot of awards a lot of recognition NVIDIA has received a lot of awards a lot of recognition but I should have a short introduction so I'm about to quit I'm just going to talk about one award last month he was elected as a member of the National Academy of Engineering this is a pretty prestigious award there are only three that I know of I actually asked ChatGPT I didn't get an absolutely clear answer how many CEOs of S&P 500 companies are members of the National Academy of Engineering but I think it's three and two are in this room Anirudh Devgan of Cadence Design Systems was awarded it last year so the two of them have that in common but let me now just conclude and congratulate Jensen not only on this award but on the amazing success of your company and thank you for speaking to us today at SIEPR Jensen how it thank you thank you you're here I'm here I guess so okay so why don't you start off with maybe some opening remarks and then I'll ask you a few questions and then then you get the tough questions well I think that after your opening remarks uh it is smartest for me not to make any opening remarks to to uh uh avoid risking uh damaging all the good things you said you know but but um let's see it's it's always good to have a pickup line um and mine was was uh do you want to see my homework and you're right we're married still we have two beautiful kids I have a perfect life uh two great puppies and um I love my job and and uh she still enjoys my homework well if you want I can ask you a few questions then yes please so if in my lifetime I thought the biggest technical development technology breakthrough was the transistor now I'm older than you yeah uh and it was a pretty fundamental deal but should I rethink is AI now the biggest change in technology that has occurred in the last 76 years to to hint at my age yeah um well first first of all the the transistor was obviously a great invention but what was the greatest capability that it enabled was software the ability for humans to express our ideas algorithms uh in a repeatable way computationally repeatable way uh is the breakthrough um what have we done we dedicated our company in the last 30 years 31 years uh to a new form of computing called accelerated computing the idea is that general purpose computing is not ideal for every every field of work and we said why don't we invent a new way of doing computation such that
cEg8cOx7UZk
in technology that has occurred in the last 76 years to to hint at my age yeah um well first first of all the the transistor was obviously a great invention but what was the greatest capability that enabled was software the ability for humans to express our ideas algorithms uh in a repeatable way computationally repeatable way uh was a was is the Breakthrough um what have we done we dedicated our company in the last 30 years 31 years uh to a new form of computing called accelerated Computing the idea is that general purpose Computing is not ideal for every every field of work and we said why don't we in invent a new way of doing computation such that we can solve problems that general purpose Computing is ill equipped at at solving and and uh uh what we what we have effectively done in in a particular area of a domain of computation that is that's that is algorithmic in nature that can be paralyzed we've taken the computational cost of computers to approximately zero so what happens when you when you uh are able to take the marginal cost of something to approximately zero some we enabled a new way of doing software where it used to be written by humans we now can use computers to write the software because the computational cost is approximately zero and so you might as well uh let the computer go off and grind on just a massive amount of experience we call data digital experience human dig digital experience called data and grind on it to find the relationships and patterns that as a result represents human knowledge and that miracle happened about a decade and a half ago we saw it coming and and we took the whole company and we shaped our computer which was already which was already driving the marginal cost of computing down to zero and we pushed it into this whole domain and as a result in the last 10 years we reduced the cost of computing by 1 million times the cost of deep learning by 1 million times and a lot of people said said to me but Jensen if you if you reduce the cost of computing your your cost by a million times then people buy less of it and it's exactly the opposite we saw that if we could reduce the marginal cost of computing down to approximately zero we might use it to do something insanely amazing large language models to literally extract all of digital human knowledge from the internet and put it into to a computer and let it go figure out what the wisd what the knowledge is that idea of scraping the entire internet and putting it in one computer let the computer figure out what the program is is an insane concept but you wouldn't ever consider doing it unless the marginal cost of computing was zero and so so
cEg8cOx7UZk
the cost of computing your your cost by a million times then people buy less of it and it's exactly the opposite we saw that if we could reduce the marginal cost of computing down to approximately zero we might use it to do something insanely amazing large language models to literally extract all of digital human knowledge from the internet and put it into a computer and let it go figure out what the wisdom what the knowledge is that idea of scraping the entire internet and putting it in one computer let the computer figure out what the program is is an insane concept but you wouldn't ever consider doing it unless the marginal cost of computing was zero and so so we made we made that breakthrough and now we've enabled this new way of doing software imagine you know for for all the people that are still new to artificial intelligence we figured out how to use a computer to understand the meaning not the pattern but the meaning of almost all digital knowledge and anything you can digitize we can understand the meaning so let me give you an example gene sequencing is digitizing genes but now with large language models we can go understand go learn the meaning of that gene amino acids we digitized you know through mass spec we digitized amino acids now we can understand from the amino acid sequence without a whole lot of work with cryo-EMs and things like that we can go figure out what is the structure of the protein and what it does what is its meaning we can also do that on a fairly large scale pretty soon we can understand what's the meaning of a cell a whole bunch of genes that are connected together and this is from a computer's perspective no different than there's a a a whole page of words and you asked it what is the meaning of it summarize what did it say summarize it for me what's the meaning this is no different than a you know big huge long page of genes what's the meaning of that big long page of proteins what's the meaning of that and so we're on the cusp of all this this is just this is the miracle of of what happened and so I would it's a long-winded answer of saying John that you're absolutely right that that that
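One concrete, publicly available way to "understand the meaning" of an amino acid sequence today is to embed it with a protein language model. The sketch below uses a small public ESM-2 checkpoint via Hugging Face transformers; the sequence is an arbitrary example, and downstream structure or function prediction would add task-specific heads on top of embeddings like these.

# Hedged sketch: embed a protein sequence with a small public ESM-2 checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

name = "facebook/esm2_t6_8M_UR50D"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # arbitrary example sequence
inputs = tokenizer(sequence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the per-residue representations into one vector for the whole protein.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)   # (1, hidden_size)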
cEg8cOx7UZk
that big long page of proteins what's the meaning of that and so we're on the cusp of all this this is just this is the miracle of of what happened and so I would it's a longwinded answer of saying John that you're absolutely right that that that that AI which was enabled by this form this new form of computing we call Accelerated Computing that took three decades to do uh is probably the single greatest invention of the computer of the in of the technology industry this will likely be the most important thing of the 21st century I agree with that 21st century but maybe not the the 20th century which was the transistor which it's got to be close we'll let history decide that's right we'll let history decide could you look ahead you I I I take it that the the GPU chip that is behind uh artificial intelligence right now is your h100 and I know you're introducing an h200 and I think I read that you plan to upgrade that each year and so could you think ahead five years March 2029 you're introducing the H700 right what will it allow us to do that we can't do now um I'll go backwards but but let me first say something about the chip that John just described um as we say a chip all of you in the audience probably because you've seen a chip before you you imagine there's a chip kind of like you know like this um the chip that John just described uh weighs 70 lbs it consists of 35,000 Parts eight of those parts came from tsmc it that one chip replaces um a data center of old CPUs like this into one computer the savings because we compute so fast the savings of that one computer is incredible and yet it's the most expensive computer the world's ever seen it's it's a quarter of a million dollar per chip we sell the world's first quar million dollar chip but the system that it replaced the cables alone cost more than the chip this h100 the cables of connecting all those old computers that's the that's the incredible thing that we did we reinvented Computing and as a result Computing marginal cost of computing went to zero that's what I just explained we took this entire data center We Shrunk it into this one chip well this one chip uh uh is really really great at trying to figure out um uh uh this form this form of computation that that without without without getting weird on you guys um call Deep learning it's really good at this thing called Ai and so so uh the way that this chip works it works not just at the
cEg8cOx7UZk
alone cost more than the chip this h100 the cables of connecting all those old computers that's the that's the incredible thing that we did we reinvented Computing and as a result Computing marginal cost of computing went to zero that's what I just explained we took this entire data center We Shrunk it into this one chip well this one chip uh uh is really really great at trying to figure out um uh uh this form this form of computation that that without without without getting weird on you guys um call Deep learning it's really good at this thing called Ai and so so uh the way that this chip works it works not just at the chip level but it works at the chip level and the algorithm level and the data center level it works together it can't it doesn't do all of its work by itself it works as a team and so you connect a whole bunch of these things together and it works at you know networking as part of it and so when you look at one of our computers it it's a it's a magnificent thing you know only only computer Engineers would think it's magnificent but it's magnificent okay um it weighs a lot miles and miles of cables hundreds of miles of cables and and the next one's soon coming is liquid cooled and you know it's beautiful in a lot of ways okay and and um uh and it computes at data center scales and together what's going to happen in the next 10 years say John um we'll increase the computational capability for M for deep learning by another million times and what happens when you do that what happens when you do that um today we we kind of learn and then we apply it we go train inference we learn and we apply it in the future we'll have continuous learning We could decide whether that whatever that continuous learning um result it will be uh uh deployed into you know the world's applications or not but the computer will will watch videos and and new text and uh from all the interactions that it's just continuously improving itself the learning process and the Train the the training process and the inference process the training process and the deployment process application process will just become one well that's exactly what we do you know we don't have like between now and o' in the morning I'm going to be doing my learning and then after that I'll just be doing inference you're learning and inferencing all the time and that reinforcement learning Loop will be continuous and that reinforcement learning will be grounded with real world data that is been um uh through interaction as well as synthetically generated data that we're creating in
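A quick compounding calculation puts the "another million times in ten years" figure in perspective; the arithmetic below is straightforward, and the interpretation (gains compounded across chips, algorithms, networking, and data-center scale) follows the framing above.

# Implied yearly improvement rate behind "another million times in ten years".
factor_total = 1_000_000
years = 10
annual = factor_total ** (1 / years)
print(f"Implied improvement: ~{annual:.1f}x per year for {years} years")
# ~4x per year, compounded across the chip, algorithm, and data-center levels.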
cEg8cOx7UZk
and uh from all the interactions that it's just continuously improving itself the learning process and the Train the the training process and the inference process the training process and the deployment process application process will just become one well that's exactly what we do you know we don't have like between now and o' in the morning I'm going to be doing my learning and then after that I'll just be doing inference you're learning and inferencing all the time and that reinforcement learning Loop will be continuous and that reinforcement learning will be grounded with real world data that is been um uh through interaction as well as synthetically generated data that we're creating in real time so this computer will be imagining all the time does that make sense just like just as when you're learning you you take take pieces of information and you go from first principles it should work like this and then we we do the the simulation the imagination in our brain and that that future imaginate imag imagin state in a lot of ways manifests itself to us as reality and so your AI computer in the future will kind of do the same it'll do synthetic data generation it'll do reinforcement learning it'll continue to be grounded by real world experiences um it'll imagine some things it'll test it with real world experience I'll be grounded by that and that entire Loop is just one giant Loop that's what happens when you can compute for a million times cheaper than today and so as I as I'm saying this notice what's what's at the core of it when you can drive the marginal cost of computing down to zero then there are many new ways of doing something you're willing to do this is no different than I'm willing to go further places because the marginal cost of Transportation has gone to zero I can fly from here to New York relatively cheap cheaply if it would if it would have taken a month you know probably never go and so it's exactly the same in transportation and all just about everything that we do and so we're we're going to take the marginal cost of computing down to approximately zero as a result we'll do a lot more computation that causes me as you probably know there have been some recent stories that Nvidia will face more competition in the inference Market than it has in the training Market but what you're saying is it's actually going to be one market I think can you comment about um you know is there going to be a separate training chip market and inference chip Market or it sounds like you're going to be continuously uh training and switching to inference maybe within one chip I I don't I don
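The idea that training and inference collapse into one continuous loop can be illustrated with a toy, fully runnable example: a one-parameter model that keeps serving predictions on a drifting data stream while updating itself from the feedback it gets. Everything here is a deliberately simple stand-in, not a description of any NVIDIA system.

# Toy illustration of continuous learning: the deployed model never stops training.
import random

w = 0.0                      # single-parameter "model"
lr = 0.05

def stream(step):
    """Drifting world: the true relationship y = a*x changes slowly over time."""
    a = 1.0 + step / 500.0
    x = random.uniform(-1, 1)
    return x, a * x

for step in range(1000):
    x, y_true = stream(step)
    y_pred = w * x           # inference: the deployed model answers immediately
    error = y_pred - y_true  # grounding: feedback from the real interaction
    w -= lr * error * x      # learning: the same model updates continuously
    if step % 200 == 0:
        print(f"step {step}: w = {w:.3f}")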
cEg8cOx7UZk
all just about everything that we do and so we're we're going to take the marginal cost of computing down to approximately zero as a result we'll do a lot more computation that causes me as you probably know there have been some recent stories that Nvidia will face more competition in the inference Market than it has in the training Market but what you're saying is it's actually going to be one market I think can you comment about um you know is there going to be a separate training chip market and inference chip Market or it sounds like you're going to be continuously uh training and switching to inference maybe within one chip I I don't I don't know why don't you explain more well today today whenever you uh prompt uh an AI it could be chat GPT or it could be co-pilot or it could be uh if you're using a surface nail platform you using mid Journey um using Firefly from Adobe whenever you're prompting it's doing inference you know inference is right so it's it's generating information for you whenever you do that what's behind it 100% of them is NVIDIA gpus and so Nvidia most of the time you engage our our our platforms are when you're inferencing and so we are 100% of the world's inferencing today is NVIDIA now is inferencing hard or Easy A lot of people the the reason why people are picking on inferences when you look at training and you look at Nvidia system doing training when you just look at it you go that looks too hard I'm not going to go do that I'm a chip company that doesn't look like a chip and so there's a natural and you have to in order for you to even prove that something works or not you're $2 billion doll into it yeah and you turn it on to realize it's not very effective you're $2 billion in two years into it the risk the risk of exploring something new is too high for the for the customers and and so a lot of a lot of competitors tend to say you know we're not into we're not into training we're into inference inference is incredibly hard let's think about it for a second the the the the response time of inference has to be really high but this is the this is the easy part that's the computer science part the the E the hard part of inference is the goal of somebody who's doing inference is to engage a lot more users to to apply that software to a large install base inference is an install base problem this is no different than somebody who's writing a an application
cEg8cOx7UZk
is too high for the for the customers and and so a lot of a lot of competitors tend to say you know we're not into we're not into training we're into inference inference is incredibly hard let's think about it for a second the the the the response time of inference has to be really high but this is the this is the easy part that's the computer science part the the E the hard part of inference is the goal of somebody who's doing inference is to engage a lot more users to to apply that software to a large install base inference is an install base problem this is no different than somebody who's writing a an application on on on an iPhone um the reason why they do so is because iPhone has such an large install base almost everyone has one and so if you wrote an application for that phone it's going to have the benefit of it it's going to be able to benefit everybody well in the case of Nvidia our accelerated Computing platform is the only accelerated Computing platform that's literally everywhere and because we we've been working on it for so long if you wrote an application for inference and you take that model and you Deploy on invidious architecture it literally runs everywhere and so you could touch everybody you can enable have greater impact and so the problem with inference is is actually install base and that takes enormous patience and years and years of success and dedication to architecture compatibility you know so on so forth you make completely State of-the-art chips is it possible though that you'll face competition that is claims to be good enough not as good as Nvidia but good enough and and much cheaper is that a is that a threat well first of all competition um we we have more competition than anyone on the planet has competition uh not only do we have competition from competitors we have competition from our customers and um and and I'm the only competitor to a customer um fully knowing they're about to design a chip to replace ours and I show them not only what my current chip is I show them what my next chip is and I'll show them what my chip after that is and so and the reason for that is because because look if you don't if you don't make an attempt at uh uh explaining why you're good at something they'll never get a chance to to buy your your products and so so we're we're completely open book in working with just about everybody in the industry um and and the reason the reason for that our our advantage is several our advantage what we're about is several things whereas you could build a chip to to be good at one
cEg8cOx7UZk
and I show them not only what my current chip is I show them what my next chip is and I'll show them what my chip after that is and so and the reason for that is because because look if you don't if you don't make an attempt at uh uh explaining why you're good at something they'll never get a chance to to buy your your products and so so we're we're completely open book in working with just about everybody in the industry um and and the reason the reason for that our our advantage is several our advantage what we're about is several things whereas you could build a chip to to be good at one particular algorithm remember computing is more than even Transformers there's this idea called a Transformer there's a whole bunch of species of Transformers and there are new Transformers being invented as we speak and the number of different types of software is really quite quite rich and the reason for that is because software engineers love to create new things innovation and we want that what Nvidia is good at is that our our architecture not only does it accelerate algorithms it's programmable meaning that that you can use it for say SQL we're the only accelerator for SQL SQL came about in the 1960s 1970s IBM in storage computing I mean SQL structured data is as important as it gets uh 300 zettabytes of data being created you know every couple of years most of it is in SQL structured databases and so so we're we can accelerate that we can accelerate quantum physics we can accelerate Schrödinger's equations we can accelerate just about you know every fluids particles um you know lots and lots of code and so what Nvidia is good at is the general field of accelerated computing one of them is generative AI and so for a data center that wants to have a lot of customers some of it in financial services some of it you know some of it in in manufacturing so on so forth in the world of computing we're you know we're we're a great standard we're in every single cloud we're in every single computer company and so our company's architecture has become a standard if you will after some 30-somewhat years and and so that's that's really our advantage if a customer can can um do something specifically that's more cost effective quite frankly I'm even surprised by that and the reason for that is this remember our chip is only part think of when you see a when you see computers these days it's not a computer like a laptop it's a computer it's a data center and you have to operate it and so people who buy and sell chips
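The SQL and structured-data acceleration mentioned here is available today through RAPIDS; a hedged sketch with cuDF (which requires an NVIDIA GPU and a RAPIDS install) is shown below. The table and the SQL-style aggregation are invented for illustration, and the equivalent pandas code on a CPU would look almost identical.

# Hedged sketch of GPU-accelerated structured-data work with RAPIDS cuDF.
import cudf

df = cudf.DataFrame({
    "customer": ["a", "b", "a", "c", "b", "a"],
    "amount":   [10.0, 25.0, 5.0, 40.0, 15.0, 30.0],
})

# Equivalent of: SELECT customer, SUM(amount), COUNT(*) FROM t GROUP BY customer
summary = df.groupby("customer").agg({"amount": ["sum", "count"]})
print(summary)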
cEg8cOx7UZk
you know we're we're a great standard we're in every single Cloud we're in every single computer company and so our company's architecture has become a standard if you will after some 30 somewhat years and and so that's that's really our advantage if a customer can can um do something specifically that's more cost effective quite frankly I'm even surprised by that and the reason for that is this remember artchip is only part think of when you see a when you see computers these days it's not a computer like a laptop it's a computer it's a Data Center and you have to operate it and so people who buy and sell chips think about the price of chips people who operate data centers think about the cost of operations our time to deployment our performance performance our utilization our flexibility across all these different applications in total allows our operations cost they call total cost of operations TCO our TCO is so good that even when the competitor's chips are free it's not cheap enough and that that is our goal to add so much value that the alternative um is not about cost and and so so we of course of course that takes a lot of a lot of hard work and we have to keep innovating and things like that and we don't take anything for granted but we have a lot of competitors as you know but maybe not everybody in the audience knows there's this term artificial general intelligence which basically I was hoping not to sound competitive but John asked a question that kind of triggered a competitive Gene and I came AC I I want to say I want to apologize I came across you know if if you will a little competitive I apologize for that I could have probably done that more artfully I will next time but he surprised me with a competitive I I I I thought I was on an economic Forum you know just walking in here I asked him I'd sent some questions to his team and I said did you look at the questions he says no I didn't look at the questions cuz I wanted to be spontaneous besides I might start thinking about it and then uh that that would be bad so we're just kind of winging it here um both of us um so I was asking when when do you think and of course it when do you think we will achieve artificial general intelligence the sort of human level intelligence is that is that 50 years away is it five years away what's your opinion um I'll give you a very specific answer but but first let me let me just tell you a couple things about what's happening that's super exciting first uh of course of
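The "even when the competitor's chips are free, it's not cheap enough" argument is a total-cost-of-operations comparison. The sketch below shows the shape of that calculation with entirely invented placeholder numbers; the point is only that chip price is one term among energy, opex, and the number of servers a workload needs.

# Back-of-envelope TCO sketch. All figures are invented placeholders.
def tco(chip_cost, servers_needed, power_kw_per_server, years=4,
        kwh_price=0.10, opex_per_server_per_year=5_000):
    energy = servers_needed * power_kw_per_server * 24 * 365 * years * kwh_price
    opex = servers_needed * opex_per_server_per_year * years
    return chip_cost * servers_needed + energy + opex

# Platform A: expensive chips, high utilization -> few servers for the workload.
# Platform B: free chips, lower throughput -> many more servers, power, and opex.
a = tco(chip_cost=250_000, servers_needed=10, power_kw_per_server=10)
b = tco(chip_cost=0, servers_needed=120, power_kw_per_server=6)
print(f"Platform A TCO: ${a:,.0f}")
print(f"Platform B TCO: ${b:,.0f}")   # higher despite the "free" chips, under these assumptions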
cEg8cOx7UZk
questions he says no I didn't look at the questions cuz I wanted to be spontaneous besides I might start thinking about it and then uh that that would be bad so we're just kind of winging it here um both of us um so I was asking when when do you think and of course it when do you think we will achieve artificial general intelligence the sort of human level intelligence is that is that 50 years away is it five years away what's your opinion um I'll give you a very specific answer but but first let me let me just tell you a couple things about what's happening that's super exciting first uh of course of course um uh we're training these models to be multimodality meaning uh that we will learn from sounds we will learn from uh words we'll learn from uh vision and we'll just watch TV and learn uh so on so forth okay just like all of us and the reason why that's so important is because we want AI to be grounded grounded not just by human value use which is what chat GPT um really innovated I remember we had large language models before but if it wasn't until reinforcement learning human feedback that human feedback that grounds the AI to something that that we feel good about human values okay um and now could you imagine now you have to generate images and videos and things like that how does it the AI know that hands don't penetrate through you know podiums uh that feet stand above the ground that when you step on water you all fall into it so you have to ground it on physics and so so now ai has to learn um by watching a lot of different examples and ideally mostly video uh that certain be certain properties um uh are are obeyed in in in the world okay it has to create what is called a world model and so so one we have to we have to understand multimodality there's a whole bunch of other modalities like as I mentioned before genes and amino acids and proteins and cells which leads to organs and you know so on so forth and so we would like to uh multim modality second is um uh greater and greater reasoning capabilities a lot of a lot of the things that we already do uh reasoning skills are encoded in common sense you know Common Sense is reasoning that we all kind of take for granted and so there are a lot of things in our knowledge in the internet that already encodes reasoning and and and models can learn that um but there's higher level reasoning uh capabilities for example example there's some questions that you ask me right now when we're talking I'm
Second is greater and greater reasoning capabilities. A lot of the things that we already do, the reasoning skills, are encoded in common sense; common sense is reasoning that we all kind of take for granted. So there are a lot of things in our knowledge, on the internet, that already encode reasoning, and models can learn that. But there are higher-level reasoning capabilities. For example, with some questions that you ask me right now when we're talking, I'm mostly doing generative AI; I'm not spending a whole lot of time reasoning about the question. However, there are certain problems, for example planning problems, where I go: that's interesting, let me think about that. I'm cycling it in the back of my mind, I'm coming up with multiple plans, I'm traversing a tree, maybe I'm going through my graph and pruning my tree, saying this doesn't make sense, but this I'm going to keep, and I simulate it in my head, and maybe I do some calculations, and so on and so forth. That long thinking, AI is not good at today. Everything that you prompt into ChatGPT, it responds to instantaneously. We would like to prompt something into ChatGPT, give it a mission statement, give it a problem, and have it think for a while, isn't that right? That kind of system, what computer science calls system 2 thinking, or long thinking, or planning, those kinds of problems, reasoning and planning, we're working on those things, and I think you're going to see some breakthroughs. So in the future, the way you interact with AI will be very different. Some of it will be: just give me a question, question and answer. Some of it will be: here's a problem, go work on it for a while and tell me tomorrow, and it does the largest amount of computation it can do by tomorrow. You could also say: I'm going to give you this problem, spend $1,000 on it but don't spend more than that, and it comes back with the best answer within the thousand, and so on and so forth.
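To make the "traversing a tree, pruning my tree" picture concrete, here is a minimal, illustrative sketch of budgeted best-first planning. The toy problem, scoring function, budget, and pruning threshold are all hypothetical; this is not how any particular model implements long thinking, just the shape of the loop: expand the promising branches, prune the ones that don't make sense, and stop when the budget is spent.

```python
import heapq
from typing import Callable, Iterable, List, Tuple

def plan_with_budget(
    start: str,
    expand: Callable[[str], Iterable[Tuple[str, float]]],  # state -> (next_state, score) pairs
    is_goal: Callable[[str], bool],
    budget: int = 50,          # maximum number of expansions ("spend this much, no more")
    prune_below: float = 0.1,  # branches scoring below this are dropped ("this doesn't make sense")
) -> Tuple[str, float]:
    """Best-first search over partial plans with a hard compute budget:
    keep expanding the most promising branch, prune weak ones, and return
    the best complete plan found when the budget runs out."""
    frontier: List[Tuple[float, str]] = [(-1.0, start)]  # max-heap via negated scores
    best_state, best_score = start, float("-inf")
    expansions = 0
    while frontier and expansions < budget:
        neg_score, state = heapq.heappop(frontier)
        score = -neg_score
        expansions += 1
        if is_goal(state):
            if score > best_score:
                best_state, best_score = state, score
            continue
        for nxt, s in expand(state):
            if s >= prune_below:
                heapq.heappush(frontier, (-s, nxt))
    return best_state, best_score

# Tiny hypothetical problem: assemble a 3-letter plan from 'a'/'b', scored by its fraction of 'a's.
def expand(state: str) -> List[Tuple[str, float]]:
    return [] if len(state) >= 3 else [(state + c, (state + c).count("a") / 3) for c in "ab"]

print(plan_with_budget("", expand, is_goal=lambda s: len(s) == 3))  # ('aaa', 1.0)
```

A real system would expand states with a learned model and score branches with a learned value estimate rather than a hand-written function; the budget parameter is what the "spend $1,000 on it" idea corresponds to.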
OK, so that's that. Now, AGI: the question on AGI is, what's the definition? In fact, that's kind of the supreme question. If you ask me, if you say, Jensen, AGI is a list of tests, and remember, as anybody in that prestigious organization that I'm now part of knows for sure about engineers, you need to have a specification, you need to know what the definition of success is, and you need to have a test. Now, if I gave an AI a lot of math tests and reasoning tests and a history test and biology tests and medical exams and bar exams and, you name it, SATs and MCATs, every single test that you can possibly imagine, and you make that list of tests and put it in front of the computer science industry, I'm guessing that in five years' time we'll do well on every single one of them. So if your definition of AGI is that it passes human tests, then I will tell you: five years. But if you ask it of me a little bit differently, the way you asked it, that AGI is going to have human intelligence, well, I'm not exactly sure how to specify all of your intelligence yet, and nobody does, really, and therefore it's hard to achieve as an engineer. Does that make sense? So the answer is, we're not sure, but we're all endeavoring to make it better and better.

So I'm going to ask two more questions and then I'm going to turn it over, because I think there are lots of good questions out there. The first one I was going to ask about is: could you just dive a little deeper into what you see as AI's role in drug discovery?

The first role is to understand the meaning of the digital information that we have right now. As you know, we have a whole lot of amino acids, and we can now, because of AlphaFold, understand the protein structure of many of them.
But the question now is: what is the meaning of that protein? What is its function? It would be great if, just as you can chat with GPT, as you guys know, you can chat with a PDF. You take a PDF file, doesn't matter what it is, my favorite is to take a PDF of a research paper, load it into ChatGPT, and just start talking to it. It's like talking to the researchers: just ask what inspired this research, what problem does it solve, what was the breakthrough, what was the state of the art before then, what were the novel ideas. Just talk to it like a human. In the future, you want to take a protein and put it into ChatGPT just like a PDF: what are you for, what enzymes activate you, what makes you happy? For example, there will be a whole sequence of genes, representing a cell, and you're going to put that cell in: what are you for, what do you do, what are you good for, what are your hopes and dreams? That's one of the most profound things we can do: understand the meaning of biology. Does that make sense? If we can understand the meaning of biology, as you guys know, once we understand the meaning of almost any information in the world, the world of computing, amazing engineers and amazing scientists, knows exactly what to do with it. That's the breakthrough: the multi-omic understanding of biology. So that's, if I can give a deep and shallow answer to your question, probably the single most profound thing that we can do.
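As a sketch of the "chat with a PDF, then chat with a protein" interaction described above: the pattern is simply to put the document, whatever its modality, into the model's context and ask questions. The `ask_llm` stub, the placeholder paper text, and the made-up protein sequence below are assumptions for illustration, not a real API or real data.

```python
def ask_llm(prompt: str) -> str:
    """Stand-in for a call to whatever chat model you use; replace the body
    with a real API call. Here it just echoes so the sketch runs end to end."""
    return f"[model answer to a {len(prompt)}-character prompt would go here]"

def chat_with_document(document: str, question: str) -> str:
    # Same pattern as pasting a paper into a chat window and asking about it:
    # put the document in the context, then ask the question.
    prompt = f"Here is a document:\n\n{document}\n\nQuestion: {question}\nAnswer:"
    return ask_llm(prompt)

# "Chat with a PDF": assumes the paper's text has already been extracted.
paper_text = "placeholder extracted text of a research paper"
print(chat_with_document(paper_text, "What problem does this research solve?"))

# "Chat with a protein": same interaction pattern, different modality.
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # made-up sequence, for illustration only
print(chat_with_document(protein, "What enzymes activate you?"))
```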
Boy, Oregon State and Stanford are really proud of you. So if I could switch gears just a little bit: Stanford has a lot of aspiring entrepreneurs, students that are entrepreneurs, and maybe they're computer science majors or engineering majors of some sort. Please don't build GPUs. What advice would you give them to improve their chances of success?

You know, I think one of my great advantages is that I have very low expectations, and I mean that. Most Stanford graduates have very high expectations, and you deserve to have high expectations, because you came from a great school. You were very successful, you're at the top of your class, obviously you were able to pay for tuition, and you're graduating from one of the finest institutions on the planet, surrounded by other kids that are just incredible. You naturally have very high expectations. People with very high expectations have very low resilience, and unfortunately, resilience matters in success. I don't know how to teach it to you, except that I hope suffering happens to you. I was fortunate that I grew up with my parents providing the conditions for us to be successful on the one hand, but there were plenty of opportunities for setbacks and suffering. To this day I use the phrase "pain and suffering" inside our company with great glee, and I mean it: boy, this is going to cause a lot of pain and suffering, and I mean that in a happy way, because you want to train, you want to refine, the character of your company. You want greatness out of them, and greatness is not intelligence, as you know. Greatness comes from character, and character isn't formed out of smart people; it's formed out of people who suffered.
So if I could wish it upon you, I don't know how to do it, but for all of you Stanford students, I wish upon you ample doses of pain and suffering.

I'm going to back out of my promise and ask you one more question. You seem incredibly motivated and energetic, but how do you keep your employees motivated and energetic when they have probably become richer than they ever expected to?

I'm surrounded by 55 people, my management team; my direct reports are 55 people. I write no reviews for any of them, I give them constant reviews, and they provide the same to me. My compensation for them is the bottom-right corner of Excel: I just drag it down. Literally, many of our executives are paid the same, exactly to the dollar. I know it's weird; it works. I don't do one-on-ones with any of them, unless they need me, and then I'll drop everything for them. I never have meetings with them just alone, and they never hear me say something to them that is only for them to know. There's not one piece of information that I somehow secretly tell the e-staff that I don't tell the rest of the company. In that way our company was designed for agility, for information to flow as quickly as possible, and for people to be empowered by what they are able to do, not by what they know. So that's the architecture of our company. I don't remember your question, but... oh, I've got it. The answer to that is my behavior: how do I celebrate success, how do I celebrate failure, how do I talk about success, how do I talk about setbacks?
Every single thing, every single day, I'm looking for opportunities to keep on instilling the culture of the company: what is important, what's not important, what's the definition of good, how do you compare yourself to good, how do you think about good, how do you think about a journey, how do you think about results. All of that, all day long.

Mark Duggan, can you help us? OK, good, so let's open it up for some questions. Let me start with Winston, and I'll come to you. Oh, we need a microphone, can you just... Ben, you've got this? Yeah, board member Winston. I have a couple of questions. The first is, what's the story about your leather jacket? The second is, according to your projection and calculation, in 5 to 10 years, how much more semiconductor manufacturing capacity is needed to support the growth of AI?

OK, I appreciate the two questions. The first question: this is what my wife bought for me, and this is what I'm wearing, because I do 0% of my own shopping. She's known me since I was 17 years old and she thinks that everything makes me itch, and the way I say I don't like something is that it makes me itch. So as soon as she finds me something that doesn't make me itch, if you look at my closet, the whole closet is the same shirt, because she doesn't want to shop for me again. That's why this is all she bought me and this is all I'm wearing, and if I don't like it, I can go shopping; otherwise, I wear it and it's good enough for me.

The second question: I'm horrible at forecasting, but I'm very good at first-principles reasoning about the size of the opportunity, so let me reason through it for you.
I have no idea how many fabs, but here's the thing that I do know. The way that we do computing today, the information was written by someone, created by someone; it's basically pre-recorded. All the words, all the videos, all the sound, everything that we do is retrieval-based; it was pre-recorded. Does that make sense? Every time you touch something on a phone, remember, somebody wrote that and stored it somewhere; it was pre-recorded, every modality. In the future, because we're going to have AIs, the AI understands the current circumstance, because it can tap into all of the world's latest news and things like that, which is called retrieval-based, and it understands your context, meaning it understands why you're asking what you're asking about. When you and I ask about the economy, we probably mean very different things, in very different contexts, and based on that, it can generate exactly the right information for you. So in the future, it already understands context, and most of computing will be generative. Today, 100% of content is pre-recorded; if in the future 100% of content will be generative, the question is, how does that change the shape of computing? Without torturing you any more, that's how I reason through things: how much more networking do we need, more or less of this, how much memory, and so on. And the answer is, we're going to need more fabs. However, remember that we're also improving the algorithms and the processing tremendously over time. It's not as if the efficiency of computing is what it is today and therefore the demand is this much; in the meantime, I'm improving computing by a million times every 10 years, while demand is going up by a trillion times, and those have to offset each other. Does that make sense? And then there's technology diffusion and so on and so forth; that's just a matter of time, but it doesn
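As a purely illustrative back-of-the-envelope version of that offset, using the round numbers quoted above (a million-fold efficiency gain per decade against a trillion-fold growth in demand) and an arbitrary baseline capacity of 1.0:

```python
# Back-of-the-envelope: required capacity grows by the ratio of demand growth
# to efficiency gains. The factors are the round numbers quoted above; the
# baseline of 1.0 is an arbitrary placeholder unit.
demand_growth_per_decade = 1e12
efficiency_gain_per_decade = 1e6

baseline_capacity = 1.0
required_capacity = baseline_capacity * demand_growth_per_decade / efficiency_gain_per_decade
print(f"Capacity needed after one decade: {required_capacity:.0e}x today's")  # ~1e+06x
```

Even with the efficiency gains, the quoted demand growth still implies far more capacity, which matches the direction of the answer above: more fabs, but far fewer than a naive demand-only estimate would suggest.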